Color Image Processing and Applications

K.N. Plataniotis and A.N. Venetsanopoulos

Color Image Processing and Applications: Engineering Monograph (English)

February 18, 2000

Springer-Verlag Berlin Heidelberg New York London Paris Tokyo Hong Kong Barcelona Budapest

Preface

The perception of color is of paramount importance to humans, since they routinely use color features to sense the environment, recognize objects and convey information. Color image processing and analysis is concerned with the manipulation of digital color images on a computer utilizing digital signal processing techniques. Like most advanced signal processing techniques, it was, until recently, confined to academic institutions and research laboratories that could afford the expensive image processing hardware needed to handle the overhead of processing large numbers of color images. However, with the advent of powerful desktop computers and the proliferation of image collection devices, such as digital cameras and scanners, color image processing techniques are now within the grasp of the general public.

This book is aimed at researchers and practitioners who work in the area of color image processing. Its purpose is to fill an existing gap in the scientific literature by presenting the state of the art research in the area. It is written at a level which can be easily understood by a graduate student in an Electrical and Computer Engineering or Computer Science program. Therefore, it can be used as a textbook that covers part of a modern graduate course in digital image processing or multimedia systems. It can also be used as a textbook for a graduate course on digital signal processing, since it contains algorithms, design criteria and architectures for processing and analysis systems.

The book is structured into four parts. The first, Chapter 1, deals with color principles and is aimed at readers who have very little prior knowledge of color science. Readers interested in color image processing may read the second part of the book (Chapters 2-5). It covers the major, although somewhat mature, fields of color image processing. Color image processing is characterized by a large number of algorithms that are specific solutions to specific problems; for example, vector median filters have been developed to remove impulsive noise from images. Some of them are mathematical or content-independent operations that are applied to each and every pixel, such as morphological operators. Others are algorithmic in nature, in the sense that a recursive strategy may be necessary to find edge pixels in an image.

The third part of the book, Chapters 6-7, deals with color image analysis and coding techniques. The ultimate goal of color image analysis is to enhance human-computer interaction. Recent applications of image analysis include the compression of color images for transmission across networks and the coding of video for video-conferencing. Finally, the fourth part (Chapter 8) covers emerging applications of color image processing. Color is useful for accessing multimedia databases. Local color information, for example in the form of color histograms, can be used to index and retrieve images from a database. Color features can also be used to identify objects of interest, such as human faces and hand areas, for applications ranging from video conferencing to perceptual interfaces and virtual environments. Because of the dual nature of this investigation, processing and analysis, the logical dependence of the chapters is somewhat unusual. The following diagram can help the reader chart the course.

Logical dependence between chapters

Acknowledgments

We acknowledge a number of individuals who have contributed in different ways to the preparation of this book. In particular, we wish to extend our appreciation to Prof. M. Zervakis for contributing the image restoration section, and to Dr. N. Herodotou for his informative inputs and valuable suggestions in the emerging applications chapter. Three graduate students of ours also merit special thanks: Shu Yu Zhu for her input and the high quality figures included in the color edge detection chapter, Ido Rabinovitch for his contribution to the color image coding section, and Nicolaos Ikonomakis for his valuable contribution to the color segmentation chapter. We also thank Nicolaos for reviewing the chapters of the book and helping with the LaTeX formatting of the manuscript. We are also grateful to Terri Vlassopoulos for proofreading the manuscript, and to Frank Holzwarth of Springer Verlag for his help during the preparation of the book. Finally, we are indebted to Peter Androutsos, who helped us tremendously with the development of the companion software.


Contents

1. Color Spaces .......... 1
   1.1  Basics of Color Vision .......... 1
   1.2  The CIE Chromaticity-based Models .......... 4
   1.3  The CIE-RGB Color Model .......... 9
   1.4  Gamma Correction .......... 13
   1.5  Linear and Non-linear RGB Color Spaces .......... 16
        1.5.1  Linear RGB Color Space .......... 16
        1.5.2  Non-linear RGB Color Space .......... 17
   1.6  Color Spaces Linearly Related to the RGB .......... 20
   1.7  The YIQ Color Space .......... 23
   1.8  The HSI Family of Color Models .......... 25
   1.9  Perceptually Uniform Color Spaces .......... 32
        1.9.1  The CIE L*u*v* Color Space .......... 33
        1.9.2  The CIE L*a*b* Color Space .......... 35
        1.9.3  Cylindrical L*u*v* and L*a*b* Color Space .......... 37
        1.9.4  Applications of L*u*v* and L*a*b* spaces .......... 37
   1.10 The Munsell Color Space .......... 39
   1.11 The Opponent Color Space .......... 41
   1.12 New Trends .......... 42
   1.13 Color Images .......... 45
   1.14 Summary .......... 45

2. Color Image Filtering .......... 51
   2.1  Introduction .......... 51
   2.2  Color Noise .......... 52
   2.3  Modeling Sensor Noise .......... 53
   2.4  Modeling Transmission Noise .......... 55
   2.5  Multivariate Data Ordering Schemes .......... 58
        2.5.1  Marginal Ordering .......... 59
        2.5.2  Conditional Ordering .......... 62
        2.5.3  Partial Ordering .......... 62
        2.5.4  Reduced Ordering .......... 63
   2.6  A Practical Example .......... 67
   2.7  Vector Ordering .......... 69
   2.8  The Distance Measures .......... 70
   2.9  The Similarity Measures .......... 72
   2.10 Filters Based on Marginal Ordering .......... 77
   2.11 Filters Based on Reduced Ordering .......... 81
   2.12 Filters Based on Vector Ordering .......... 89
   2.13 Directional-based Filters .......... 92
   2.14 Computational Complexity .......... 98
   2.15 Conclusion .......... 100

3. Adaptive Image Filters .......... 107
   3.1  Introduction .......... 107
   3.2  The Adaptive Fuzzy System .......... 109
        3.2.1  Determining the Parameters .......... 112
        3.2.2  The Membership Function .......... 113
        3.2.3  The Generalized Membership Function .......... 115
        3.2.4  Members of the Adaptive Fuzzy Filter Family .......... 116
        3.2.5  A Combined Fuzzy Directional and Fuzzy Median Filter .......... 122
        3.2.6  Comments .......... 125
        3.2.7  Application to 1-D Signals .......... 128
   3.3  The Bayesian Parametric Approach .......... 131
   3.4  The Non-parametric Approach .......... 137
   3.5  Adaptive Morphological Filters .......... 146
        3.5.1  Introduction .......... 146
        3.5.2  Computation of the NOP and the NCP .......... 152
        3.5.3  Computational Complexity and Fast Algorithms .......... 154
   3.6  Simulation Studies .......... 157
   3.7  Conclusions .......... 173

4. Color Edge Detection .......... 179
   4.1  Introduction .......... 179
   4.2  Overview of Color Edge Detection Methodology .......... 181
        4.2.1  Techniques Extended from Monochrome Edge Detection .......... 181
        4.2.2  Vector Space Approaches .......... 183
   4.3  Vector Order Statistic Edge Operators .......... 189
   4.4  Difference Vector Operators .......... 194
   4.5  Evaluation Procedures and Results .......... 197
        4.5.1  Probabilistic Evaluation .......... 198
        4.5.2  Noise Performance .......... 200
        4.5.3  Subjective Evaluation .......... 201
   4.6  Conclusion .......... 203

5. Color Image Enhancement and Restoration .......... 209
   5.1  Introduction .......... 209
   5.2  Histogram Equalization .......... 210
   5.3  Color Image Restoration .......... 214
   5.4  Restoration Algorithms .......... 217
   5.5  Algorithm Formulation .......... 220
        5.5.1  Definitions .......... 220
        5.5.2  Direct Algorithms .......... 223
        5.5.3  Robust Algorithms .......... 227
   5.6  Conclusions .......... 229

6. Color Image Segmentation .......... 237
   6.1  Introduction .......... 237
   6.2  Pixel-based Techniques .......... 239
        6.2.1  Histogram Thresholding .......... 239
        6.2.2  Clustering .......... 242
   6.3  Region-based Techniques .......... 247
        6.3.1  Region Growing .......... 248
        6.3.2  Split and Merge .......... 250
   6.4  Edge-based Techniques .......... 252
   6.5  Model-based Techniques .......... 253
        6.5.1  The Maximum A-posteriori Method .......... 254
        6.5.2  The Adaptive MAP Method .......... 255
   6.6  Physics-based Techniques .......... 256
   6.7  Hybrid Techniques .......... 257
   6.8  Application .......... 260
        6.8.1  Pixel Classification .......... 260
        6.8.2  Seed Determination .......... 262
        6.8.3  Region Growing .......... 267
        6.8.4  Region Merging .......... 269
        6.8.5  Results .......... 271
   6.9  Conclusion .......... 273

7. Color Image Compression .......... 279
   7.1  Introduction .......... 279
   7.2  Image Compression Comparison Terminology .......... 282
   7.3  Image Representation for Compression Applications .......... 285
   7.4  Lossless Waveform-based Image Compression Techniques .......... 286
        7.4.1  Entropy Coding .......... 286
        7.4.2  Lossless Compression Using Spatial Redundancy .......... 288
   7.5  Lossy Waveform-based Image Compression Techniques .......... 290
        7.5.1  Spatial Domain Methodologies .......... 290
        7.5.2  Transform Domain Methodologies .......... 292
   7.6  Second Generation Image Compression Techniques .......... 304
   7.7  Perceptually Motivated Compression Techniques .......... 307
        7.7.1  Modeling the Human Visual System .......... 307
        7.7.2  Perceptually Motivated DCT Image Coding .......... 311
        7.7.3  Perceptually Motivated Wavelet-based Coding .......... 313
        7.7.4  Perceptually Motivated Region-based Coding .......... 317
   7.8  Color Video Compression .......... 319
   7.9  Conclusion .......... 324

8. Emerging Applications .......... 329
   8.1  Input Analysis Using Color Information .......... 331
   8.2  Shape and Color Analysis .......... 337
        8.2.1  Fuzzy Membership Functions .......... 338
        8.2.2  Aggregation Operators .......... 340
   8.3  Experimental Results .......... 343
   8.4  Conclusions .......... 345

A. Companion Image Processing Software .......... 349
   A.1  Image Filtering .......... 350
   A.2  Image Analysis .......... 350
   A.3  Image Transforms .......... 351
   A.4  Noise Generation .......... 351

Index .......... 353

List of Figures

1.1  The visible light spectrum .......... 1
1.2  The CIE XYZ color matching functions .......... 7
1.3  The CIE RGB color matching functions .......... 7
1.4  The chromaticity diagram .......... 9
1.5  The Maxwell triangle .......... 10
1.6  The RGB color model .......... 11
1.7  Linear to non-linear light transformation .......... 18
1.8  Non-linear to linear light transformation .......... 19
1.9  Transformation of intensities from image capture to image display .......... 19
1.10 The HSI color space .......... 26
1.11 The HLS color space .......... 31
1.12 The HSV color space .......... 31
1.13 The L*u*v* color space .......... 34
1.14 The Munsell color system .......... 40
1.15 The opponent color stage of the human visual system .......... 42
1.16 A taxonomy of color models .......... 46

3.1  Simulation I: Filter outputs (1st component) .......... 129
3.2  Simulation I: Filter outputs (2nd component) .......... 129
3.3  Simulation II: Actual signal and noisy input (1st component) .......... 130
3.4  Simulation II: Actual signal and noisy input (2nd component) .......... 131
3.5  Simulation II: Filter outputs (1st component) .......... 132
3.6  Simulation II: Filter outputs (2nd component) .......... 132
3.7  A flowchart of the NOP research algorithm .......... 155
3.8  The adaptive morphological filter .......... 157
3.9  `Peppers' corrupted by 4% impulsive noise .......... 169
3.10 `Lenna' corrupted with Gaussian noise (σ = 15) mixed with 2% impulsive noise .......... 169
3.11 VMF of (3.9) using 3x3 window .......... 170
3.12 BVDF of (3.9) using 3x3 window .......... 170
3.13 HF of (3.9) using 3x3 window .......... 170
3.14 AHF of (3.9) using 3x3 window .......... 170
3.15 FVDF of (3.9) using 3x3 window .......... 170
3.16 ANNMF of (3.9) using 3x3 window .......... 170
3.17 CANNMF of (3.9) using 3x3 window .......... 170
3.18 BFMA of (3.9) using 3x3 window .......... 170
3.19 VMF of (3.10) using 3x3 window .......... 171
3.20 BVDF of (3.10) using 3x3 window .......... 171
3.21 HF of (3.10) using 3x3 window .......... 171
3.22 AHF of (3.10) using 3x3 window .......... 171
3.23 FVDF of (3.10) using 3x3 window .......... 171
3.24 ANNMF of (3.10) using 3x3 window .......... 171
3.25 CANNMF of (3.10) using 3x3 window .......... 171
3.26 BFMA of (3.10) using 3x3 window .......... 171
3.27 `Mandrill' - 10% impulsive noise .......... 173
3.28 NOP-NCP filtering results .......... 173
3.29 VMF using 3x3 window .......... 173
3.30 Multistage close-opening filtering results .......... 173

4.1  Edge detection by derivative operators .......... 180
4.2  Sub-window configurations .......... 195
4.3  Test color image `ellipse' .......... 202
4.4  Test color image `flower' .......... 202
4.5  Test color image `Lenna' .......... 202
4.6  Edge map of `ellipse': Sobel detector .......... 203
4.7  Edge map of `ellipse': VR detector .......... 203
4.8  Edge map of `ellipse': DV detector .......... 203
4.9  Edge map of `ellipse': DVhv detector .......... 203
4.10 Edge map of `flower': Sobel detector .......... 204
4.11 Edge map of `flower': VR detector .......... 204
4.12 Edge map of `flower': DV detector .......... 204
4.13 Edge map of `flower': DVadap detector .......... 204
4.14 Edge map of `Lenna': Sobel detector .......... 205
4.15 Edge map of `Lenna': VR detector .......... 205
4.16 Edge map of `Lenna': DV detector .......... 205
4.17 Edge map of `Lenna': DVadap detector .......... 205

5.1  The original color image `mountain' .......... 215
5.2  The histogram equalized color output .......... 215

6.1  Partitioned image .......... 250
6.2  Corresponding quad-tree .......... 250
6.3  The HSI cone with achromatic region in yellow .......... 261
6.4  Original image. Achromatic pixels: intensity < 10, > 90 .......... 262
6.5  Saturation < 5 .......... 262
6.6  Saturation < 10 .......... 262
6.7  Saturation < 15 .......... 262
6.8  Original image. Achromatic pixels: saturation < 10, intensity > 90 .......... 263
6.9  Intensity < 5 .......... 263
6.10 Intensity < 10 .......... 263
6.11 Intensity < 15 .......... 263
6.12 Original image. Achromatic pixels: saturation < 10, intensity < 10 .......... 264
6.13 Intensity > 85 .......... 264
6.14 Intensity > 90 .......... 264
6.15 Intensity > 95 .......... 264
6.16 Original image .......... 265
6.17 Pixel classification with chromatic pixels in red and achromatic pixels in the original color .......... 265
6.18 Original image .......... 265
6.19 Pixel classification with chromatic pixels in tan and achromatic pixels in the original color .......... 265
6.20 Artificial image with level 1, 2, and 3 seeds .......... 266
6.21 The region growing algorithm .......... 267
6.22 Original `Claire' image .......... 270
6.23 `Claire' image showing seeds with VAR = 0.2 .......... 270
6.24 Segmented `Claire' image (before merging), Tchrom = 0.15 .......... 270
6.25 Segmented `Claire' image (after merging), Tchrom = 0.15 and Tmerge = 0.2 .......... 270
6.26 Original `Carphone' image .......... 271
6.27 `Carphone' image showing seeds with VAR = 0.2 .......... 271
6.28 Segmented `Carphone' image (before merging), Tchrom = 0.15 .......... 271
6.29 Segmented `Carphone' image (after merging), Tchrom = 0.15 and Tmerge = 0.2 .......... 271
6.30 Original `Mother-Daughter' image .......... 272
6.31 `Mother-Daughter' image showing seeds with VAR = 0.2 .......... 272
6.32 Segmented `Mother-Daughter' image (before merging), Tchrom = 0.15 .......... 272
6.33 Segmented `Mother-Daughter' image (after merging), Tchrom = 0.15 and Tmerge = 0.2 .......... 272

7.1  The zig-zag scan .......... 297
7.2  DCT based coding .......... 298
7.3  Original color image `Peppers' .......... 299
7.4  Image coded at a compression ratio 5:1 .......... 299
7.5  Image coded at a compression ratio 6:1 .......... 299
7.6  Image coded at a compression ratio 6.3:1 .......... 299
7.7  Image coded at a compression ratio 6.35:1 .......... 299
7.8  Image coded at a compression ratio 6.75:1 .......... 299
7.9  Subband coding scheme .......... 301
7.10 Relationship between different scale subspaces .......... 302
7.11 Multiresolution analysis decomposition .......... 303
7.12 The wavelet-based scheme .......... 304
7.13 Second generation coding schemes .......... 304
7.14 The human visual system .......... 307
7.15 Overall operation of the processing module .......... 318
7.16 MPEG-1: Coding module .......... 322
7.17 MPEG-1: Decoding module .......... 322

8.1  Skin and lip clusters in the RGB color space .......... 333
8.2  Skin and lip clusters in the L*a*b* color space .......... 333
8.3  Skin and lip hue distributions in the HSV color space .......... 334
8.4  Overall scheme to extract the facial regions within a scene .......... 337
8.5  Template for hair color classification = R1 + R2 + R3 .......... 342
8.6  Carphone: Frame 80 .......... 344
8.7  Segmented frame .......... 344
8.8  Frames 20-95 .......... 344
8.9  Miss America: Frame 20 .......... 345
8.10 Frames 20-120 .......... 345
8.11 Akiyo: Frame 20 .......... 345
8.12 Frames 20-110 .......... 345

A.1  Screenshot of the main CIPAView window at startup .......... 350
A.2  Screenshot of Difference Vector Mean edge detector being applied .......... 351
A.3  Gray scale image quantized to 4 levels .......... 352
A.4  Screenshot of an image being corrupted by impulsive noise .......... 352

List of Tables

1.1  EBU Tech 3213 Primaries .......... 12
1.2  ITU-R BT.709 Primaries .......... 13
1.3  Color Model .......... 46
2.1  Computational Complexity .......... 100
3.1  Noise Distributions .......... 158
3.2  Filters Compared .......... 159
3.3  Subjective Image Evaluation Guidelines .......... 161
3.4  Figure of Merit .......... 162
3.5  NMSE (x10^-2) for the RGB `Lenna' image, 3x3 window .......... 164
3.6  NMSE (x10^-2) for the RGB `Lenna' image, 5x5 window .......... 165
3.7  NMSE (x10^-2) for the RGB `peppers' image, 3x3 window .......... 165
3.8  NMSE (x10^-2) for the RGB `peppers' image, 5x5 window .......... 166
3.9  NCD for the RGB `Lenna' image, 3x3 window .......... 166
3.10 NCD for the RGB `Lenna' image, 5x5 window .......... 167
3.11 NCD for the RGB `peppers' image, 3x3 window .......... 167
3.12 NCD for the RGB `peppers' image, 5x5 window .......... 168
3.13 Subjective Evaluation .......... 168
3.14 Performance measures for the image Mandrill .......... 172
4.1  Vector Order Statistic Operators .......... 198
4.2  Difference Vector Operators .......... 199
4.3  Numerical Evaluation with Synthetic Images .......... 199
4.4  Noise Performance .......... 201
6.1  Comparison of Chromatic Distance Measures .......... 269
6.2  Color Image Segmentation Techniques .......... 273
7.1  Storage requirements .......... 280
7.2  A taxonomy of image compression methodologies: First Generation .......... 283
7.3  A taxonomy of image compression methodologies: Second Generation .......... 283
7.4  Quantization table for the luminance component .......... 296
7.5  Quantization table for the chrominance components .......... 296
7.6  The JPEG suggested quantization table .......... 312
7.7  Quantization matrix based on the contrast sensitivity function for 1.0 min/pixel .......... 312
8.1  Miss America (Width x Height = 360 x 288): Shape & Color Analysis .......... 343

1. Color Spaces

1.1 Basics of Color Vision

Color is a sensation created in response to excitation of our visual system by electromagnetic radiation known as light [1], [2], [3]. More specifically, color is the perceptual result of light in the visible region of the electromagnetic spectrum, having wavelengths in the region of 400 nm to 700 nm, incident upon the retina of the human eye. The physical power, or radiance, of the incident light is expressed as a spectral power distribution (SPD), often divided into 31 components, each representing a 10 nm band [4]-[13].

Fig. 1.1. The visible light spectrum

The human retina has three types of color photo-receptor cells, called cones, which respond to radiation with somewhat different spectral response curves [4]-[5]. A fourth type of photo-receptor cell, called rods, is also present in the retina. These are effective only at extremely low light levels, for example during night vision. Although rods are important for vision, they play no role in image reproduction [14], [15]. The branch of color science concerned with the appropriate description and specification of a color is called colorimetry [5], [10]. Since there are exactly three types of color photo-receptor cone cells, three numerical components are necessary and sufficient to describe a color, provided that appropriate spectral weighting functions are used. Therefore, a color can be specified by a tri-component vector. The set of all colors forms a vector space called a color space or color model. The three components of a color can be defined in many different ways, leading to various color spaces [5], [9].


Before proceeding with color specification systems (color spaces), it is appropriate to define a few terms: intensity (usually denoted I), brightness (Br), luminance (Y), lightness (L*), hue (H) and saturation (S), which are often confused or misused in the literature. The intensity (I) is a measure, over some interval of the electromagnetic spectrum, of the flow of power that is radiated from, or incident on, a surface; it is often called a linear-light measure and is expressed in units such as watts per square meter [4], [5], [16], [18]. The brightness (Br) is defined as the attribute of a visual sensation according to which an area appears to emit more or less light [5]. Since brightness perception is very complex, the Commission Internationale de l'Eclairage (CIE) defined another quantity, luminance (Y), which is radiant power weighted by a spectral sensitivity function that is characteristic of human vision [5]. Human vision has a nonlinear perceptual response to luminance, which is called lightness (L*); the nonlinearity is roughly logarithmic [4]. Humans interpret a color based on its lightness (L*), hue (H) and saturation (S) [5]. Hue is a color attribute associated with the dominant wavelength in a mixture of light waves. Thus hue represents the dominant color as perceived by an observer; when an object is said to be red, orange, or yellow, its hue is being specified. In other words, it is the attribute of a visual sensation according to which an area appears to be similar to one of the perceived colors red, yellow, green and blue, or a combination of two of them [4], [5]. Saturation refers to the relative purity or the amount of white light mixed with a hue. The pure spectrum colors are fully saturated and contain no white light. Colors such as pink (red and white) and lavender (violet and white) are less saturated, with the degree of saturation being inversely proportional to the amount of white light added [1]. A color can be de-saturated by adding white light that contains power at all wavelengths [4]. Hue and saturation together describe the chrominance. The perception of color is basically determined by luminance and chrominance [1].

To utilize color as a visual cue in multimedia, image processing, graphics and computer vision applications, an appropriate method for representing the color signal is needed. The different color specification systems, or color models (color spaces or solids), address this need. Color spaces provide a rational method to specify, order, manipulate and effectively display the object colors under consideration. A well-chosen representation preserves essential information and provides insight into the visual operation needed. Thus, the selected color model should be well suited to the problem statement and its solution. The process of selecting the best color representation involves knowing how color signals are generated and what information is needed from these signals. Although color spaces impose constraints on color perception and representation, they also help humans perform important tasks. In particular, color models may be used to define colors, discriminate between colors, judge similarity between colors and identify color categories for a number of applications [12], [13].


Color model literature can be found in the domain of modern sciences, such as physics, engineering, artificial intelligence, computer science, psychology and philosophy. In the literature, four basic color model families can be distinguished [14]:

1. Colorimetric color models, which are based on physical measurements of spectral reflectance. Three primary color filters and a photo-meter, such as the CIE chromaticity diagram, usually serve as the initial points for such models.
2. Psychophysical color models, which are based on the human perception of color. Such models are either based on subjective observation criteria and comparative references (e.g. the Munsell color model) or are built through experimentation to comply with the human perception of color (e.g. the Hue, Saturation and Lightness model).
3. Physiologically inspired color models, which are based on the three primaries, that is, the three types of cones in the human retina. The Red-Green-Blue (RGB) color space used in computer hardware is the best known example of a physiologically inspired color model.
4. Opponent color models, which are based on perception experiments, utilizing mainly pairwise opponent primary colors, such as the Yellow-Blue and Red-Green color pairs.

In image processing applications, color models can alternatively be divided into three categories, namely:

1. Device-oriented color models, which are associated with input, processing and output signal devices. Such spaces are of paramount importance in modern applications, where there is a need to specify color in a way that is compatible with the hardware tools used to provide, manipulate or receive the color signals.
2. User-oriented color models, which are utilized as a bridge between the human operators and the hardware used to manipulate the color information. Such models allow the user to specify color in terms of perceptual attributes, and they can be considered an experimental approximation of the human perception of color.
3. Device-independent color models, which are used to specify color signals independently of the characteristics of a given device or application. Such models are important in applications where color comparisons and transmission of visual information over networks connecting different hardware platforms are required.

In 1931, the Commission Internationale de l'Eclairage (CIE) adopted standard color curves for a hypothetical standard observer. These color curves specify how a specific spectral power distribution (SPD) of an external stimulus (visible radiant light incident on the eye) can be transformed into a set of three numbers that specify the color.


The CIE color specification system is based on the description of color as the luminance component Y and two additional components X and Z [5]. The spectral weighting curves of X and Z have been standardized by the CIE based on statistics from experiments involving human observers [5]. The CIE XYZ tristimulus values can be used to describe any color, and the corresponding color space is called the CIE XYZ color space. The XYZ model is a device-independent color space that is useful in applications where consistent color representation across devices with different characteristics is important. Thus, it is exceptionally useful for color management purposes. The CIE XYZ space is, however, perceptually highly non-uniform [4]. Therefore, it is not appropriate for quantitative manipulations involving color perception and is seldom used in image processing applications [4], [10].

Traditionally, color images have been specified by the non-linear red (R'), green (G') and blue (B') tristimulus values, and color image storage, processing and analysis is done in this non-linear RGB (R'G'B') color space. The red, green and blue components are called the primary colors. In general, hardware devices such as video cameras, color image scanners and computer monitors process the color information based on these primary colors. Other popular color spaces in image processing are the YIQ (North American TV standard), the HSI (Hue, Saturation and Intensity), and the HSV (Hue, Saturation, Value) color spaces used in computer graphics. Although XYZ is used only indirectly, it has a significant role in image processing since other color spaces can be derived from it through mathematical transforms. For example, the linear RGB color space can be transformed to and from the CIE XYZ color space using a simple linear three-by-three matrix transform. Similarly, other color spaces, such as non-linear RGB, YIQ and HSI, can be transformed to and from the CIE XYZ space, but may require complex and non-linear computations. The CIE has also derived and standardized two other color spaces from the CIE XYZ color space, called L*u*v* and L*a*b*, which are perceptually uniform [5].

The rest of this chapter is devoted to the analysis of the different color spaces in use today. The different color representation models are discussed and analyzed in detail, with emphasis placed on motivation and design characteristics.

1.2 The CIE Chromaticity-based Models

Over the years, the CIE committee has sponsored research on color perception. This has led to a class of widely used mathematical color models. The derivation of these models is based on a number of color matching experiments, in which an observer judges whether two parts of a visual stimulus match in appearance. Since the colorimetry experiments are based on a matching procedure in which the human observer judges the visual similarity of two areas, the theoretical model predicts only matching and not perceived colors.


Through these experiments it was found that light of almost any spectral composition can be matched by mixtures of only three primaries (lights of a single wavelength). The CIE defined a number of standard observer color matching functions by compiling experiments with different observers, different light sources and various power and spectral compositions. Based on the experiments performed by the CIE early in the twentieth century, it was determined that these three primary colors can be broadly chosen, provided that they are independent. The CIE's experimental matching laws allow for the representation of colors as vectors in a three-dimensional space defined by the three primary colors. In this way, changes between color spaces can be accomplished easily. The next few paragraphs briefly outline how such a task can be accomplished.

According to experiments conducted by Thomas Young in the nineteenth century [19], and later validated by other researchers [20], there are three different types of cones in the human retina, each with a different absorption spectrum: S1(λ), S2(λ), S3(λ), where 380 ≤ λ ≤ 780 nm. These peak approximately in the yellow-green, green and blue regions of the electromagnetic spectrum, with significant overlap between S1 and S2. For each wavelength, the absorption spectra provide the weight with which light of a given spectral power distribution (SPD) contributes to the cone's output. Based on Young's theory, the color sensation that is produced by a light having SPD C(λ) can be defined as:

α_i(C) = ∫_{λ1}^{λ2} S_i(λ) C(λ) dλ,    i = 1, 2, 3        (1.1)

According to (1.1), any two colors C1(λ) and C2(λ) such that α_i(C1) = α_i(C2), i = 1, 2, 3, will be perceived to be identical even if C1(λ) and C2(λ) are different.

This well-known phenomenon, in which spectrally different stimuli are indistinguishable to a human observer, is called metamerism [14]; it constitutes a rather dramatic illustration of the perceptual nature of color and of the limitations of the color modeling process. Assume that three primary colors Ck, k = 1, 2, 3, with SPDs Ck(λ) are available, and let

∫ C_k(λ) dλ = 1,    k = 1, 2, 3        (1.2)

To match a color C with spectral energy distribution C(λ), the three primaries are mixed in proportions β_k, k = 1, 2, 3. Their linear combination Σ_{k=1}^{3} β_k C_k(λ) should be perceived as C(λ). Substituting this into (1.1) leads to:

α_i(C) = ∫ ( Σ_{k=1}^{3} β_k C_k(λ) ) S_i(λ) dλ = Σ_{k=1}^{3} β_k ∫ S_i(λ) C_k(λ) dλ,    i = 1, 2, 3        (1.3)


The quantity ∫ S_i(λ) C_k(λ) dλ can be interpreted as the i-th cone response generated by one unit of the k-th primary color:

α_{i,k} = α_i(C_k) = ∫ S_i(λ) C_k(λ) dλ,    i = 1, 2, 3        (1.4)

Therefore, the color matching equations are:

Σ_{k=1}^{3} β_k α_{i,k} = α_i(C) = ∫ S_i(λ) C(λ) dλ,    i = 1, 2, 3        (1.5)

assuming a certain set of primary colors C_k(λ) and spectral sensitivity curves S_i(λ). For a given arbitrary color, the weights β_k can be found by simply solving (1.4) and (1.5). Following the same approach, w_k can be defined as the amount of the k-th primary required to match the reference white, provided that a reference white light source with known energy distribution w(λ) is available. In such a case, the values obtained through

T_k(C) = β_k / w_k,    k = 1, 2, 3        (1.6)

are called the tristimulus values of the color C and determine the relative amounts of the primaries required to match that color. The tristimulus values of any given color C(λ) can be obtained from the spectral tristimulus values T_k(λ), which are defined as the tristimulus values of the unit-energy spectral color at wavelength λ; the spectral tristimulus values T_k(λ) provide the so-called spectral matching curves, which are obtained by setting C(λ) = δ(λ − λ0) in (1.5).
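In practice, the matching integrals above are evaluated as finite sums over sampled spectra, e.g. the 31 bands of 10 nm width mentioned in Section 1.1. The following Python sketch illustrates a discrete approximation of Eq. (1.1); the Gaussian-shaped sensitivity curves and the test spectrum used here are illustrative placeholders, not measured or standardized data.

```python
import numpy as np

# Wavelength grid: 400-700 nm sampled every 10 nm (31 bands, see Section 1.1).
wavelengths = np.arange(400, 701, 10, dtype=float)

def cone_responses(C, S, dlambda=10.0):
    """Discrete form of Eq. (1.1): alpha_i(C) ~ sum over bands of S_i * C * dlambda.

    C : (31,) sampled spectral power distribution of the stimulus.
    S : (3, 31) sampled sensitivity curves S_1, S_2, S_3.
    """
    return (S * C).sum(axis=1) * dlambda

# Placeholder Gaussian sensitivities peaking near 560, 530 and 420 nm.
peaks = np.array([[560.0], [530.0], [420.0]])
S = np.exp(-0.5 * ((wavelengths - peaks) / 40.0) ** 2)

# A test stimulus; two spectrally different stimuli that produce the same three
# responses would be metamers for the observer modeled by S.
C = np.exp(-0.5 * ((wavelengths - 580.0) / 30.0) ** 2)
print(cone_responses(C, S))
```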

The spectral matching curves for a particular choice of color primaries with an approximately red, green and blue appearance were defined in the CIE 1931 standard [9]. A set of pure monochromatic primaries is used: blue (435.8 nm), green (546.1 nm) and red (700 nm). In Figures 1.2 and 1.3 the Y-axis indicates the relative amount of each primary needed to match a stimulus of the wavelength reported on the X-axis. It can be seen that some of the values are negative. Negative numbers require that the primary in question be added to the opposite side of the original stimulus. Since negative sources are not physically realizable, it can be concluded that an arbitrary set of three primary sources cannot match all the visible colors. However, for any given color a suitable set of three primary colors can be found. Based on the assumption that the human visual system behaves linearly, the CIE defined spectral matching curves in terms of virtual primaries. This constitutes a linear transformation such that the spectral matching curves are all positive and thus immediately applicable in a range of practical situations. The end results are referred to as the CIE 1931 standard observer matching curves, and the individual curves (functions) are labeled


x̄(λ), ȳ(λ) and z̄(λ), respectively. In the CIE 1931 standard the matching curves were selected so that ȳ(λ) was proportional to the human luminosity function, an experimentally determined measure of the perceived brightness of monochromatic light.

Fig. 1.2. The CIE XYZ color matching functions

Fig. 1.3. The CIE RGB color matching functions

If the spectral energy distribution C(λ) of a stimulus is given, then the chromaticity coordinates can be determined in two stages. First, the tristimulus values X, Y, Z are calculated as follows:

X = ∫ x̄(λ) C(λ) dλ        (1.7)

Y = ∫ ȳ(λ) C(λ) dλ        (1.8)

Z = ∫ z̄(λ) C(λ) dλ        (1.9)

The new set of primaries must satisfy the following conditions:

1. The XYZ components of all visible colors should be non-negative.
2. Two of the primaries should have zero luminance.
3. As many spectral colors as possible should have at least one zero XYZ component.

Secondly, normalized tristimulus values, called chromaticity coordinates, are calculated based on the primaries as follows:

x = X / (X + Y + Z)        (1.10)

y = Y / (X + Y + Z)        (1.11)

z = Z / (X + Y + Z)        (1.12)

Clearly z = 1 − (x + y), and hence only two coordinates are necessary to describe a color match. Therefore, the chromaticity coordinates project the 3-D color solid onto a plane; they are usually plotted as a parametric x-y plot with z implicitly evaluated as z = 1 − (x + y). This diagram is known as the chromaticity diagram and has a number of interesting properties that are used extensively in image processing. In particular:

1. The chromaticity coordinates (x, y) jointly represent the chrominance components of a given color.
2. The entire color space can be represented by the coordinates (x, y, T), in which T = constant is a given chrominance plane.
3. The chromaticity diagram represents every physically realizable color as a point within a well-defined boundary. The boundary represents the primary sources, and its vertices have coordinates defined by the chromaticities of the primaries.
4. A white point is located in the center of the chromaticity diagram. More saturated colors radiate outwards from white. Complementary pure colors can easily be determined from the diagram.
5. In the chromaticity diagram, the color perception obtained through the superposition of light coming from two different sources lies on a straight line between the points representing the component lights in the diagram.


6. Since the chromaticity diagram reveals the range of all colors which can be produced by means of the three primaries (the gamut), it can be used to guide the selection of primaries subject to design constraints and technical specifications.
7. The chromaticity diagram can be utilized to determine the hue and saturation of a given color, since it represents chrominance by eliminating luminance.

Based on the initial objectives set out by the CIE, two of the primaries, X and Z, have zero luminance, while the primary Y is the luminance indicator determined by the light-efficiency function V(λ), which coincides with the spectral matching curve ȳ(λ). Thus, in the chromaticity diagram the dominant wavelength (hue) of a given color can be defined as the intersection of the line drawn from the reference white through the color with the boundary of the diagram. Once the hue has been determined, the purity of the color can be found as the ratio r = wc/wp of the line segment that connects the reference white with the color (wc) to the line segment between the reference white and the dominant wavelength/hue (wp).

Fig. 1.4. The chromaticity diagram

1.3 The CIE-RGB Color Model

The fundamental assumption behind modern colorimetry theory, as it applies to image processing tasks, is that the initial basis for color vision lies in the different excitation of three classes of photo-receptor cones in the retina. These include the red, green and blue receptors, which define a trichromatic space whose basis of primaries are pure colors in the short, medium and high portions of the visible spectrum [4], [5], [10].


As a result of the assumed linear nature of light, and due to the principle of superposition, the colors of a mixture are a function of the primaries and the fraction of each primary that is mixed. Throughout this analysis, the primaries need not be known, just their tristimulus values. This principle is called additive reproduction. It is employed in image and video devices used today, where the color spectra from red, green and blue light beams are physically summed at the surface of the projection screen. Direct-view color CRTs (cathode ray tubes) also utilize additive reproduction. In particular, the CRT screen consists of small dots which produce red, green and blue light. When the screen is viewed from a distance, the spectra of these dots add up in the retina of the observer. In practice, it is possible to reproduce a large number of colors by additive reproduction using the three primaries: red, green and blue. The colors that result from additive reproduction are completely determined by the three primaries. The video projectors and the color CRTs in use today utilize a color space collectively known under the name RGB, which is based on the red, green and blue primaries and a white reference point. To uniquely specify a color space based on the three primary colors, the chromaticity values of each primary color and a white reference point need to be specified. The gamut of colors which can be mixed from the set of RGB primaries is given in the (x, y) chromaticity diagram by a triangle whose vertices are the chromaticities of the primaries (the Maxwell triangle) [5], [20]. This is shown in Figure 1.5.

Fig. 1.5. The Maxwell triangle

Fig. 1.6. The RGB color model

In the red, green and blue system the color solid generated is a bounded subset of the space generated by each primary. Using an appropriate scale along each primary axis, the space can be normalized, so that the maximum value is 1. Therefore, as can be seen in Figure 1.6, the RGB color solid is a cube, called the RGB cube. The origin of the cube, defined as (0, 0, 0), corresponds to black, and the point with coordinates (1, 1, 1) corresponds to the system's brightest white. In image processing, computer graphics and multimedia systems the RGB representation is the one most often used. A digital color image is represented by a two-dimensional array of three-variate vectors which comprise each pixel's red, green and blue values. However, these pixel values are relative to the three primary colors which form the color space. As was mentioned earlier, to uniquely define a color space, the chromaticities of the three primary colors and the reference white must be specified. If these are not specified within the chromaticity diagram, the pixel values used in the digital representation of the color image are meaningless [16]. In practice, although a number of RGB space variants have been defined and are in use today, their exact specifications are usually not available to the end user. Multimedia users assume that all digital images are represented in the same RGB space and thus use, compare or manipulate them directly, no matter where these images are from. If a color digital image is represented in the RGB system and no information about its chromaticity characteristics is available, the user cannot accurately reproduce or manipulate the image.
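In array terms, a digital RGB image is just such a two-dimensional grid of three-component vectors. A minimal sketch follows; 8-bit storage is assumed here only because it is the most common case, the chapter does not prescribe a bit depth.

```python
import numpy as np

# A 2 x 2 RGB image stored as an (height, width, 3) array of 8-bit values.
img = np.array([[[  0,   0,   0], [255, 255, 255]],    # black and white corners of the cube
                [[255,   0,   0], [128, 128, 128]]],   # pure red and a grey-scale point
               dtype=np.uint8)

# Scaling each channel to [0, 1] places every pixel inside the unit RGB cube of Fig. 1.6.
img_unit = img.astype(np.float64) / 255.0
```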

12

have been de ned and used in the television industry. Among them are the Federal Communication Commission of America (FCC) 1953 primaries, the Society of Motion Picture and Television Engineers (SMPTE) `C' primaries, the European Broadcasting Union (EBU) primaries and the ITU-R BT.709 standard (formerly known as CCIR Rec. 709) [24]. Most of these standards use a white reference point known as CIE D65 but other reference points, such as the CIE illuminant E are also be used [4]. In additive color mixtures the white point is de ned as the one with equal red, green and blue components. However, there is no unique physical or perceptual de nition of white, so the characteristics of the white reference point should be de ned prior to its utilization in the color space de nition. In the CIE illuminant E, or equal-energy illuminant, white is de ned as the point whose spectral power distribution is uniform throughout the visible spectrum. A more realistic reference white, which approximates daylight has been speci ed numerically by the CIE as illuminant D65. The D65 reference white is the one most often used for color interchange and the reference point used throughout this work. The appropriate red, green and blue chromaticities are determined by the technology employed, such as the sensors in the cameras, the phosphors within the CTR's and the illuminants used. The standards are an attempt to quantify the industry's practice. For example, in the FCC-NTSC standard, the set of primaries and speci ed white reference point were representative of the phosphors used in color CRTs of a certain era. Although the sensor technology has changed over the years in response to market demands for brighter television receivers, the standards remain the same. To alleviate this problem, the European Broadcasting Union (EBU) has established a new standard (EBU Tech 3213). It is de ned in Table 1.1. Table 1.1.

Table 1.1. EBU Tech 3213 Primaries

Colorimetry    x       y       z
Red            0.640   0.330   0.030
Green          0.290   0.600   0.110
Blue           0.150   0.060   0.790
White D65      0.3127  0.3290  0.3582

Recently, an international agreement has finally been reached on the primaries for the High Definition Television (HDTV) specification. These primaries are representative of contemporary monitors in computing, computer graphics and studio video production. The standard is known as ITU-R BT.709 and its primaries, along with the D65 reference white, are defined in Table 1.2. The different RGB systems can be converted amongst each other using a linear transformation, assuming that the white reference values being used are known. As an example, if it is assumed that D65 is used in both

Table 1.2. ITU-R BT.709 Primaries

Colorimetry    x       y       z
Red            0.640   0.330   0.030
Green          0.300   0.600   0.100
Blue           0.150   0.060   0.790
White D65      0.3127  0.3290  0.3582

systems, then the conversion between the ITU-R BT.709 and SMPTE `C' primaries is defined by the following matrix transformation:

$$\begin{bmatrix} R_{709} \\ G_{709} \\ B_{709} \end{bmatrix} = \begin{bmatrix} 0.939555 & 0.050173 & 0.010272 \\ 0.017775 & 0.965795 & 0.016430 \\ 0.001622 & 0.004371 & 1.005993 \end{bmatrix} \begin{bmatrix} R_c \\ G_c \\ B_c \end{bmatrix} \quad (1.13)$$

where $R_{709}$, $G_{709}$, $B_{709}$ are the linear red, green and blue components of the ITU-R BT.709 system and $R_c$, $G_c$, $B_c$ are the linear components in the SMPTE `C' system.
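The matrix in (1.13) can be applied directly to linear pixel triplets. The following minimal Python/numpy sketch (not part of the original text; function names are illustrative) applies the conversion and obtains the reverse direction by matrix inversion:

```python
# Sketch of the SMPTE `C' <-> ITU-R BT.709 primary conversion of (1.13).
# Assumes both systems use the D65 white point and that inputs are LINEAR RGB.
import numpy as np

SMPTE_C_TO_709 = np.array([
    [0.939555, 0.050173, 0.010272],
    [0.017775, 0.965795, 0.016430],
    [0.001622, 0.004371, 1.005993],
])

def smpte_c_to_rec709(rgb_c):
    """Linear SMPTE `C' RGB triplet -> linear ITU-R BT.709 RGB triplet."""
    return SMPTE_C_TO_709 @ np.asarray(rgb_c, dtype=float)

def rec709_to_smpte_c(rgb_709):
    """Inverse conversion, obtained by inverting the same matrix."""
    return np.linalg.inv(SMPTE_C_TO_709) @ np.asarray(rgb_709, dtype=float)

if __name__ == "__main__":
    c = np.array([0.2, 0.5, 0.8])
    print(smpte_c_to_rec709(c))
    print(rec709_to_smpte_c(smpte_c_to_rec709(c)))  # approximately recovers c
```

As the text notes next, this matrix is only meaningful when the pixel values have first been brought into the linear-light domain.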

The conversion should be carried out in the linear voltage domain, so the pixel values must first be converted into linear voltages. This requires undoing the so-called gamma correction, discussed next.

1.4 Gamma Correction

In image processing, computer graphics, digital video and photography, the symbol $\gamma$ represents a numerical parameter which describes the nonlinearity of the intensity reproduction. The cathode-ray tube (CRT) employed in modern computing systems is nonlinear in the sense that the intensity of light reproduced at the screen of a CRT monitor is a nonlinear function of the voltage input. A CRT has a power law response to applied voltage: the light intensity produced on the display is proportional to the applied voltage raised to a power denoted by $\gamma$ [4], [16], [17]. Thus, the intensity produced by the CRT and the voltage applied to the CRT are related as follows:

$$I_{int} = (v')^{\gamma} \quad (1.14)$$

This relationship, which is called the `five-halves' power law, is dictated by the physics of the CRT electron gun. The above function applies to a single electron gun of a gray-scale CRT or to each of the three red, green and blue electron guns of a color CRT. The functions associated with the three guns on a color CRT are very similar to each other but not necessarily identical. The actual value of $\gamma$ for a particular CRT may range from about 2.3 to 2.6, although most practitioners frequently claim values lower than 2.2 for video monitors. The process of precompensating for the nonlinearity by computing a voltage signal from an intensity value is called gamma correction. The function required is approximately a 0.45 power function. In image processing applications, gamma correction is accomplished by analog circuits at the camera.


In computer graphics, gamma correction is usually accomplished by incorporating the function into a frame buffer lookup table. Although in image processing systems gamma was originally used to refer to the nonlinearity of the CRT, it has been generalized to refer to the nonlinearity of an entire image processing system. The $\gamma$ value of an image or an image processing system can be calculated by multiplying the $\gamma$'s of its individual components, from the image capture stage to the display. The model used in (1.14) can cause wide variability in the value of gamma, mainly due to black level errors, since it forces zero voltage to map to zero intensity for any value of gamma. A slightly different model can be used in order to resolve the black level error. The modified model is given as:

$$I_{int} = (voltage + \epsilon)^{2.5} \quad (1.15)$$

By fixing the exponent of the power function at 2.5 and using the single parameter $\epsilon$ to accommodate black level errors, the modified model fits the observed nonlinearity much better than the variable gamma model in (1.14). The voltage-to-intensity function defined in (1.15) is nearly the inverse of the luminance-to-brightness relationship of human vision. Human vision defines luminance as a weighted mixture of the spectral energy, where the weights are determined by the characteristics of the human retina. The CIE has standardized a weighting function which relates spectral power to luminance. In this standardized function, the lightness perceived by humans relates to the physical luminance (proportional to intensity) by the following equation:

$$L^* = \begin{cases} 116\left(\dfrac{Y}{Y_n}\right)^{1/3} - 16 & \text{if } \dfrac{Y}{Y_n} > 0.008856 \\[2mm] 903.3\left(\dfrac{Y}{Y_n}\right) & \text{if } \dfrac{Y}{Y_n} \le 0.008856 \end{cases} \quad (1.16)$$

where $Y_n$ is the luminance of the reference white, usually normalized either to 1.0 or to 100. Thus, the lightness perceived by humans is, approximately, the cubic root of the luminance, and the lightness sensation can be computed as intensity raised, approximately, to the one-third power. If the CRT nonlinearity roughly inverts this perceptual nonlinearity, the entire image processing system can be considered linear or almost linear. To compensate for the nonlinearity of the display (CRT), gamma correction with a power of $\frac{1}{\gamma}$ can be used so that the overall system gamma is approximately 1. In a video system, the gamma correction is applied at the camera to precompensate for the nonlinearity of the display. The gamma correction performs the following transfer function:

$$voltage' = (voltage)^{\frac{1}{\gamma}} \quad (1.17)$$

where voltage is the voltage generated by the camera sensors. The gamma correction exponent is the reciprocal of the display gamma, resulting in an overall transfer function with a unit power exponent.


To achieve subjectively pleasing images, the end-to-end power function of the overall imaging system should be around 1.1 or 1.2 instead of the mathematically correct linear system. REC 709 specifies a power exponent of 0.45 at the camera which, in conjunction with the 2.5 exponent at the display, results in an overall exponent value of about 1.13. If the overall exponent is greater than 1, the image appears sharper but the scene contrast range which can be reproduced is reduced. On the other hand, reducing the exponent has a tendency to make the image appear soft and washed out. For color images, the linear R, G, and B values should be converted into nonlinear voltages R', G' and B' through the application of the gamma correction process. The color CRT will then convert R', G' and B' into linear red, green and blue light to reproduce the original color. The ITU-R BT.709 standard recommends a gamma exponent value of 0.45 for High Definition Television. In practical systems, such as TV cameras, certain modifications are required to ensure proper operation near the dark regions of an image, where the slope of a pure power function is infinite at zero. The red tristimulus (linear light) component may be gamma-corrected at the camera by applying the following convention:

$$R'_{709} = \begin{cases} 4.5R & \text{if } R \le 0.018 \\ 1.099R^{0.45} - 0.099 & \text{if } 0.018 < R \end{cases} \quad (1.18)$$

with R denoting the linear light and $R'_{709}$ the resulting gamma-corrected value. The computations are identical for the G and B components. The linear R, G, and B values are normally in the range [0, 1] when color images are used in digital form. The software library translates these floating point values to 8-bit integers in the range of 0 to 255 for use by the graphics hardware. Thus, the gamma-corrected value should be:

$$R' = 255R^{\frac{1}{\gamma}} \quad (1.19)$$

The scaling by 255 in (1.19) is applied during the A/D process. However, gamma correction is usually performed in cameras, and thus pixel values are in most cases nonlinear voltages. Thus, intensity values stored in the frame buffer of the computing device are gamma corrected on-the-fly by hardware lookup tables on their way to the computer monitor display. Modern image processing systems utilize a wide variety of sources of color images, such as images captured by digital cameras, scanned images, digitized video frames and computer generated images. Digitized video frames usually have a gamma correction value between 0.5 and 0.45. Digital scanners assume an output gamma in the range of 1.4 to 2.2 and they perform their gamma correction accordingly. For computer generated images the gamma correction value is usually unknown. In the absence of the actual gamma value the recommended gamma correction is 0.45.
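As a rough illustration of equations (1.18) and (1.19), the following Python sketch (not part of the original text; names and test values are illustrative) encodes a scalar linear-light value with the Rec. 709 camera transfer function and then quantizes it to an 8-bit integer:

```python
# Minimal sketch of Rec. 709 gamma correction (1.18) followed by 8-bit scaling (1.19).
# Assumes a scalar linear-light input in [0, 1].
def rec709_gamma_encode(value):
    """Map a linear-light value in [0, 1] to its gamma-corrected counterpart."""
    if value <= 0.018:
        return 4.5 * value
    return 1.099 * value ** 0.45 - 0.099

def to_8bit(value_nonlinear):
    """Quantize a nonlinear value in [0, 1] to the 0-255 integer range."""
    return int(round(255 * value_nonlinear))

if __name__ == "__main__":
    for r in (0.0, 0.01, 0.18, 0.5, 1.0):
        encoded = rec709_gamma_encode(r)
        print(r, encoded, to_8bit(encoded))
```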


In summary, pixel values alone cannot specify the actual color. The gamma correction value used for capturing or generating the color image is also needed. Thus, two images which have been captured with two cameras operating under different gamma correction values will represent colors differently, even if the same primaries and the same white reference point are used.

1.5 Linear and Non-linear RGB Color Spaces

The image processing literature rarely discriminates between linear RGB and non-linear (R'G'B') gamma-corrected values. For example, in the JPEG and MPEG standards and in image filtering, non-linear RGB (R'G'B') color values are implicit. Unacceptable results are obtained when JPEG or MPEG schemes are applied to linear RGB image data [4]. On the other hand, in computer graphics, linear RGB values are implicitly used [4]. Therefore, it is very important to understand the difference between linear and non-linear RGB values and to be aware of which values are used in an image processing application. Hereafter, the notation R'G'B' will be used for non-linear RGB values so that they can be clearly distinguished from the linear RGB values.

1.5.1 Linear RGB Color Space

As mentioned earlier, intensity is a measure, over some interval of the electromagnetic spectrum, of the flow of power that is radiated from an object. Intensity is often called a linear light measure. The linear R value is proportional to the intensity of the physical power that is radiated from an object around the 700 nm band of the visible spectrum. Similarly, a linear G value corresponds to the 546.1 nm band and a linear B value corresponds to the 435.8 nm band. As a result, the linear RGB space is device independent and is used in some color management systems to achieve color consistency across diverse devices. The linear RGB values in the range [0, 1] can be converted to the corresponding CIE XYZ values in the range [0, 1] using the following matrix transformation [4]:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4125 & 0.3576 & 0.1804 \\ 0.2127 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9502 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (1.20)$$

The transformation from CIE XYZ values in the range [0, 1] to RGB values in the range [0, 1] is defined by:

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 3.2405 & -1.5372 & -0.4985 \\ -0.9693 & 1.8760 & 0.0416 \\ 0.0556 & -0.2040 & 1.0573 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \quad (1.21)$$


Alternatively, tristimulus XYZ values can be obtained from the linear RGB values through the following matrix [5]:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.490 & 0.310 & 0.200 \\ 0.117 & 0.812 & 0.011 \\ 0.000 & 0.010 & 0.990 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (1.22)$$
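A minimal numpy sketch (illustrative only, not from the book) of the linear RGB to CIE XYZ conversion of (1.20) and the inverse of (1.21) follows; inputs are assumed to be linear values in [0, 1]:

```python
# Sketch of the linear RGB <-> CIE XYZ conversions in equations (1.20) and (1.21).
import numpy as np

RGB_TO_XYZ = np.array([
    [0.4125, 0.3576, 0.1804],
    [0.2127, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9502],
])

XYZ_TO_RGB = np.array([
    [ 3.2405, -1.5372, -0.4985],
    [-0.9693,  1.8760,  0.0416],
    [ 0.0556, -0.2040,  1.0573],
])

def rgb_to_xyz(rgb):
    """Linear RGB triplet in [0, 1] -> CIE XYZ triplet."""
    return RGB_TO_XYZ @ np.asarray(rgb, dtype=float)

def xyz_to_rgb(xyz):
    """CIE XYZ triplet -> linear RGB triplet."""
    return XYZ_TO_RGB @ np.asarray(xyz, dtype=float)

if __name__ == "__main__":
    rgb = np.array([0.3, 0.6, 0.1])
    print(xyz_to_rgb(rgb_to_xyz(rgb)))  # approximately recovers the input
```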

The linear RGB values are a physical representation of the chromatic light radiated from an object. However, the perceptual response of the human visual system to radiated red, green, and blue intensities is non-linear and more complex. The linear RGB space is, perceptually, highly non-uniform and not suitable for numerical analysis of the perceptual attributes. Thus, linear RGB values are very rarely used to represent an image. Instead, non-linear R'G'B' values are traditionally used in image processing applications such as filtering.

1.5.2 Non-linear RGB Color Space

When an image acquisition system, e.g. a video camera, is used to capture the image of an object, the camera is exposed to the linear light radiated from the object. The linear RGB intensities incident on the camera are transformed to non-linear RGB signals using gamma correction. The transformation to non-linear R'G'B' values in the range [0, 1] from linear RGB values in the range [0, 1] is defined by:

$$R' = \begin{cases} 4.5R & \text{if } R \le 0.018 \\ 1.099R^{\frac{1}{\gamma_C}} - 0.099 & \text{otherwise} \end{cases}$$
$$G' = \begin{cases} 4.5G & \text{if } G \le 0.018 \\ 1.099G^{\frac{1}{\gamma_C}} - 0.099 & \text{otherwise} \end{cases}$$
$$B' = \begin{cases} 4.5B & \text{if } B \le 0.018 \\ 1.099B^{\frac{1}{\gamma_C}} - 0.099 & \text{otherwise} \end{cases} \quad (1.23)$$

where $\gamma_C$ is known as the gamma factor of the camera or the acquisition device. The value of $\gamma_C$ commonly used in video cameras is $1/0.45$ ($\simeq 2.22$) [4]. The above transformation is graphically depicted in Figure 1.7. The linear segment near low intensities minimizes the effect of sensor noise in practical cameras and scanners. Thus, the digital values of the image pixels acquired from the object and stored within a camera or a scanner are the R'G'B' values, usually converted to the range of 0 to 255. Three bytes are then required to represent the three components R', G', and B' of a color image pixel, with one byte for each component. It is these non-linear R'G'B' values that are stored as image data files in computers and are used in image processing applications. The RGB symbol used in the image processing literature usually refers to the R'G'B' values and, therefore, care must be taken in color space conversions and other relevant calculations.

Fig. 1.7. Linear to nonlinear light transformation

Suppose the acquired image of an object needs to be displayed on a display device such as a computer monitor. Ideally, a user would like to see (perceive) an exact reproduction of the object. As pointed out, the image data are R'G'B' values. Signals (usually voltages) proportional to the R'G'B' values will be applied to the red, green, and blue guns of the CRT (cathode ray tube), respectively. The intensity of the red, green, and blue light generated by the CRT is a non-linear function of the applied signal. The non-linearity of the CRT is a function of the electrostatics of the cathode and the grid of the electron gun. In order to achieve correct reproduction of intensities, an ideal monitor should invert the transformation of the acquisition device (camera) so that the intensities generated are identical to the linear RGB intensities that were radiated from the object and incident on the acquisition device. Only then will the perception of the displayed image be identical to the perception of the object. A conventional CRT has a power-law response, as depicted in Figure 1.8. This power-law response, which maps the non-linear (R'G'B') values in the range [0, 1] back to linear RGB values in the range [0, 1], is defined by the following power function [4]:

$$R = \begin{cases} \dfrac{R'}{4.5} & \text{if } R' \le 0.018 \\[2mm] \left(\dfrac{R' + 0.099}{1.099}\right)^{\gamma_D} & \text{otherwise} \end{cases}$$
$$G = \begin{cases} \dfrac{G'}{4.5} & \text{if } G' \le 0.018 \\[2mm] \left(\dfrac{G' + 0.099}{1.099}\right)^{\gamma_D} & \text{otherwise} \end{cases}$$
$$B = \begin{cases} \dfrac{B'}{4.5} & \text{if } B' \le 0.018 \\[2mm] \left(\dfrac{B' + 0.099}{1.099}\right)^{\gamma_D} & \text{otherwise} \end{cases} \quad (1.24)$$

The exponent of the power function, $\gamma_D$, is known as the gamma factor of the display device or CRT. Normal display devices have $\gamma_D$ in the range of 2.2 to 2.45. For exact reproduction of the intensities, the gamma factor of the display device must be equal to the gamma factor of the acquisition device ($\gamma_C = \gamma_D$). Therefore, a CRT with a gamma factor of 2.2 should correctly reproduce the intensities.

Fig. 1.8. Non-linear to linear light transformation
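The camera-side encoding of (1.23) and the display-side decoding of (1.24) can be paired in a short round-trip check. The Python sketch below is illustrative only; the break point on the encoded axis is taken as 4.5 × 0.018 = 0.081 so that the two branches of the decoder meet the encoder, which is an assumption rather than a statement from the text:

```python
# Sketch of the camera transfer function (1.23) and the CRT response (1.24),
# with gamma_C = gamma_D = 2.22 so the round trip approximately recovers the input.
def camera_encode(value, inv_gamma_c=0.45):
    """Linear light in [0, 1] -> nonlinear R'G'B' value, per (1.23)."""
    if value <= 0.018:
        return 4.5 * value
    return 1.099 * value ** inv_gamma_c - 0.099

def crt_decode(value_nonlinear, gamma_d=2.22):
    """Nonlinear value in [0, 1] -> linear light, per (1.24); 0.081 break point assumed."""
    if value_nonlinear <= 0.081:
        return value_nonlinear / 4.5
    return ((value_nonlinear + 0.099) / 1.099) ** gamma_d

if __name__ == "__main__":
    for v in (0.0, 0.01, 0.2, 0.7, 1.0):
        print(v, crt_decode(camera_encode(v)))
```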

The transformations that take place throughout the process of image acquisition to image display and perception are illustrated in Figure 1.9.

Fig. 1.9. Transformation of intensities from image capture to image display

It is obvious from the above discussion that the R'G'B' space is a device dependent space. Suppose a color image, represented in the R'G'B' space, is displayed on two computer monitors having different gamma factors. The red, green, and blue intensities produced by the monitors will not be identical and the displayed images might have different appearances. Device dependent spaces cannot be used if color consistency across various devices, such as display devices, printers, etc., is of primary concern. However, similar devices


(e.g. two computer monitors) usually have similar gamma factors, and in such cases device dependency might not be an important issue. As mentioned before, the human visual system has a non-linear perceptual response to intensity, which is roughly logarithmic and is, approximately, the inverse of a conventional CRT's non-linearity [4]. In other words, the perceived red, green, and blue intensities are approximately related to the R'G'B' values. Due to this fact, computations involving R'G'B' values have an approximate relation to human color perception and the R'G'B' space is less perceptually non-uniform relative to the CIE XYZ and linear RGB spaces [4]. Hence, distance measures defined between the R'G'B' values of two color vectors provide a computationally simple estimation of the error between them. This is very useful for real-time applications and systems in which computational resources are at a premium. However, the R'G'B' space is not adequately uniform, and it cannot be used for accurate perceptual computations. In such instances, perceptually uniform color spaces (e.g. L*u*v* and L*a*b*) that are derived based on the attributes of human color perception are more desirable than the R'G'B' space [4].

1.6 Color Spaces Linearly Related to the RGB

In transmitting color images through a computer-centric network, all three primaries should be transmitted. Thus, storage or transmission of a color image using RGB components requires a channel capacity three times that of gray scale images. To reduce these requirements and to boost bandwidth utilization, the properties of the human visual system must be taken into consideration. There is strong evidence that the human visual system forms an achromatic channel and two chromatic color-difference channels in the retina. Consequently, a color image can be represented as a wide band component corresponding to brightness, and two narrow band color components with considerably less data rate than that allocated to brightness. Since a large percentage (around 60%) of the brightness is attributed to the green primary, it is advantageous to base the color components on the other two primaries. The simplest way to form the two color components is to subtract the brightness from the blue and red primaries. In this way the unit RGB color cube is transformed into the luminance Y and two color difference components B − Y and R − Y [33], [34]. Once these color difference components have been formed, they can be sub-sampled to reduce the bandwidth or data capacity without any visible degradation in performance. The color difference components are calculated from the nonlinear gamma-corrected values R', G', B' rather than the tristimulus (linear voltage) R, G, B primary components. According to the CIE standards the color imaging system should operate similarly to a gray scale system, with a CIE luminance component Y formed


as a weighted sum of RGB tristimulus values. The coefficients in the weighted sum correspond to the sensitivity of the human visual system to each of the RGB primaries. The coefficients are also a function of the chromaticity of the white reference point used. International agreement on the REC. 709 standard provides a value for the luminance component based on the REC. 709 primaries [24]. Thus, the $Y'_{709}$ luminance equation is:

$$Y'_{709} = 0.2125R'_{709} + 0.7154G'_{709} + 0.0721B'_{709} \quad (1.25)$$

where $R'_{709}$, $B'_{709}$ and $G'_{709}$ are the gamma-corrected (nonlinear) values of the three primaries. The two color difference components $B'_{709} - Y'_{709}$ and $R'_{709} - Y'_{709}$ can be formed on the basis of the above equation. Various scale factors are applied to the basic color difference components for different applications. For example, the $Y'P_RP_B$ system is used for component analog video, such as BetaCam, and $Y'C_BC_R$ for component digital video,

such as studio video, JPEG and MPEG. Kodak's YCC (PhotoCD model) uses scale factors optimized for the gamut of film colors [31]. All these systems utilize different versions of the ($Y'_{709}$, $B'_{709} - Y'_{709}$, $R'_{709} - Y'_{709}$) components, which are scaled to place the extrema of the component signals at more convenient values. In particular, the $Y'P_RP_B$ system used in component analog equipment is defined by the following set:

$$\begin{bmatrix} Y'_{601} \\ P_B \\ P_R \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.168736 & -0.331264 & 0.5 \\ 0.5 & -0.418686 & -0.081312 \end{bmatrix} \begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} \quad (1.26)$$

and

$$\begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} = \begin{bmatrix} 1 & 0 & 1.402 \\ 1 & -0.344136 & -0.714136 \\ 1 & 1.772 & 0 \end{bmatrix} \begin{bmatrix} Y'_{601} \\ P_B \\ P_R \end{bmatrix} \quad (1.27)$$

The first row comprises the luminance coefficients, which sum to unity. For each of the other two rows the coefficients sum to zero, a necessity for color difference formulas. The 0.5 weights reflect the maximum excursion of $P_B$ and $P_R$ for the blue and the red primaries. The $Y'C_BC_R$ space is the Rec ITU-R BT.601-4 international standard for studio quality component digital video. The luminance signal is coded in 8 bits. The $Y'$ component has an excursion of 219 with an offset of 16, with black coded at 16 and white at 235. The color differences are also coded in 8-bit form, with excursions of ±112 and an offset of 128, for a range of 16 through 240 inclusive. To compute $Y'C_BC_R$ from nonlinear R'G'B' in the range [0, 1] the following set should be used:

$$\begin{bmatrix} Y'_{601} \\ C_B \\ C_R \end{bmatrix} = \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} + \begin{bmatrix} 65.481 & 128.553 & 24.966 \\ -37.797 & -74.203 & 112.0 \\ 112.0 & -93.786 & -18.214 \end{bmatrix} \begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} \quad (1.28)$$


with the inverse transform

$$\begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} = \begin{bmatrix} 0.00456821 & 0.0 & 0.00625893 \\ 0.00456621 & -0.00153632 & -0.00318811 \\ 0.00456621 & 0.00791071 & 0.0 \end{bmatrix} \left(\begin{bmatrix} Y'_{601} \\ C_B \\ C_R \end{bmatrix} - \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix}\right) \quad (1.29)$$
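A minimal numpy sketch (illustrative, not from the book) of the forward conversion of (1.28); rather than re-typing the coefficients of (1.29), the inverse direction is obtained here by inverting the same matrix:

```python
# Sketch of the R'G'B' (range [0, 1]) -> Y'CbCr conversion of equation (1.28).
import numpy as np

OFFSET = np.array([16.0, 128.0, 128.0])
M = np.array([
    [ 65.481, 128.553,  24.966],
    [-37.797, -74.203, 112.0  ],
    [112.0,   -93.786, -18.214],
])

def rgb_to_ycbcr(rgb_prime):
    """Nonlinear R'G'B' in [0, 1] -> Y'CbCr (Y' in [16, 235], Cb/Cr in [16, 240])."""
    return OFFSET + M @ np.asarray(rgb_prime, dtype=float)

def ycbcr_to_rgb(ycbcr):
    """Inverse conversion obtained by matrix inversion (equivalent to (1.29))."""
    return np.linalg.inv(M) @ (np.asarray(ycbcr, dtype=float) - OFFSET)

if __name__ == "__main__":
    rgb = np.array([0.25, 0.5, 0.75])
    print(rgb_to_ycbcr(rgb))
    print(ycbcr_to_rgb(rgb_to_ycbcr(rgb)))  # approximately recovers the input
```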

When 8-bit R'G'B' values are used, black is coded at 0 and white at 255. To encode $Y'C_BC_R$ from R'G'B' in the range [0, 255] using 8-bit binary arithmetic, the transformation matrix should be scaled by $\frac{256}{255}$. The resulting transformation pair is as follows:

$$\begin{bmatrix} Y'_{601} \\ C_B \\ C_R \end{bmatrix} = \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} + \frac{1}{256}\begin{bmatrix} 65.481 & 128.553 & 24.966 \\ -37.797 & -74.203 & 112.0 \\ 112.0 & -93.786 & -18.214 \end{bmatrix} \begin{bmatrix} R'_{255} \\ G'_{255} \\ B'_{255} \end{bmatrix} \quad (1.30)$$

where $R'_{255}$ is the gamma-corrected value, obtained using a gamma-correction lookup table for $\frac{1}{\gamma}$. This yields RGB intensity values with integer components between 0 and 255 which are gamma-corrected by the hardware. To obtain R'G'B' values in the range [0, 255] from $Y'C_BC_R$ using 8-bit arithmetic, the following transformation should be used:

$$\begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} = 256\begin{bmatrix} 0.00456821 & 0.0 & 0.00625893 \\ 0.00456621 & -0.00153632 & -0.00318811 \\ 0.00456621 & 0.00791071 & 0.0 \end{bmatrix} \left(\begin{bmatrix} Y'_{601} \\ C_B \\ C_R \end{bmatrix} - \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix}\right) \quad (1.31)$$

Some of the coefficients, when scaled by 256, become larger than unity and thus some clipping may be required so that the results fall within the acceptable RGB range. The Kodak YCC color space is another example of a predistorted color space, which has been designed for the storage of still color images on the Photo-CD. It is derived from the predistorted (gamma-corrected) R'G'B' values using the ITU-R BT.709 recommended white reference point, primaries, and gamma correction values. The YCC space is similar to the $Y'C_BC_R$ space discussed above, although the scaling of $B' - Y'$ and $R' - Y'$ is asymmetrical in order to accommodate a wide color gamut, similar to that of a photographic film. In particular, the following relationships hold for Photo-CD compressed formats:

$$Y' = \frac{255}{1.402}\, Y'_{601} \quad (1.32)$$

$$C_1 = 156 + 111.40(B' - Y') \quad (1.33)$$


$$C_2 = 137 + 135.64(R' - Y') \quad (1.34)$$

The two chrominance components are compressed by factors of 2 both horizontally and vertically. To reproduce predistorted R'G'B' values in the range [0, 1] from integer PhotoYCC components the following transform is applied:

$$\begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} = \begin{bmatrix} 0.00549804 & 0.0 & 0.0051681 \\ 0.00549804 & -0.0015446 & -0.0026325 \\ 0.00549804 & 0.0079533 & 0.0 \end{bmatrix} \left(\begin{bmatrix} Y' \\ C_1 \\ C_2 \end{bmatrix} - \begin{bmatrix} 0 \\ 156 \\ 137 \end{bmatrix}\right) \quad (1.35)$$

The $B' - Y'$ and $R' - Y'$ components can be converted into polar coordinates to represent the perceptual attributes of hue and saturation. The values can be computed using the following formulas [34]:

$$H = \tan^{-1}\left(\frac{B' - Y'}{R' - Y'}\right) \quad (1.36)$$
$$S = \left((B' - Y')^2 + (R' - Y')^2\right)^{\frac{1}{2}} \quad (1.37)$$

where the saturation S is the length of the vector from the origin of the chromatic plane to the specific color, and the hue H is the angle between the $R' - Y'$ axis and the saturation component [33].

1.7 The YIQ Color Space

The YIQ color specification system, used in commercial color TV broadcasting and video systems, is based upon the color television standard that was adopted in the 1950s by the National Television System Committee (NTSC) [10], [1], [27], [28]. Basically, YIQ is a recoding of non-linear R'G'B' for transmission efficiency and for maintaining compatibility with monochrome TV standards [1], [4]. In fact, the Y component of the YIQ system provides all the video information required by a monochrome television system. The YIQ model was designed to take advantage of the human visual system's greater sensitivity to changes in luminance than to changes in hue or saturation [1]. Due to these characteristics of the human visual system, it is useful in a video system to specify a color with a component representative of luminance Y and two other components: the in-phase component I, an orange-cyan axis, and the quadrature component Q, the magenta-green axis. The two chrominance components are used to jointly represent hue and saturation.


With this model, it is possible to convey the component representative of luminance Y in such a way that the noise (or quantization) introduced in transmission, processing and storage is minimal and has a perceptually similar effect across the entire tone scale from black to white [4]. This is done by allowing more bandwidth (bits) to code the luminance (Y) and less bandwidth (bits) to code the chrominance (I and Q), for efficient transmission and storage purposes, without introducing large perceptual errors due to quantization [1]. Another implication is that the luminance (Y) component of an image can be processed without affecting its chrominance (color content). For instance, histogram equalization of a color image represented in YIQ format can be performed simply by applying histogram equalization to its Y component [1]; the relative colors in the image are not affected by this process. The ideal way to accomplish these goals would be to form a luminance component (Y) by applying a matrix transform to the linear RGB components and then subjecting the luminance (Y) to a non-linear transfer function to achieve a component similar to lightness L*. However, there are practical reasons in a video system why these operations are performed in the opposite order [4]. First, gamma correction is applied to each of the linear RGB components. Then, a weighted sum of the nonlinear components is computed to form a component representative of luminance Y. The resulting component (luma) is related to luminance but is not the same as the CIE luminance Y, although the same symbol is used for both of them. The nonlinear RGB to YIQ conversion is defined by the following matrix transformation [4], [1]:

$$\begin{bmatrix} Y \\ I \\ Q \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ 0.596 & -0.275 & -0.321 \\ 0.212 & -0.523 & 0.311 \end{bmatrix} \begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} \quad (1.38)$$

As can be seen from the above transformation, the blue component makes only a small contribution to the brightness sensation (luma Y), despite the fact that human vision has extraordinarily good color discrimination capability among blue colors [4]. The inverse matrix transformation is performed to convert YIQ to nonlinear R'G'B'. Introducing a cylindrical coordinate transformation, numerical values for hue and saturation can be calculated as follows:

$$H_{YIQ} = \tan^{-1}\left(\frac{Q}{I}\right) \quad (1.39)$$
$$S_{YIQ} = \left(I^2 + Q^2\right)^{\frac{1}{2}} \quad (1.40)$$
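The following Python sketch (illustrative; not from the book) computes YIQ from nonlinear R'G'B' per (1.38) and the cylindrical correlates of (1.39)-(1.40); np.arctan2 is used instead of a plain arctangent so that the quadrant of the hue angle is preserved:

```python
# Sketch of the nonlinear R'G'B' -> YIQ conversion (1.38) and the hue/saturation
# correlates of (1.39)-(1.40).
import numpy as np

RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],
    [0.596, -0.275, -0.321],
    [0.212, -0.523,  0.311],
])

def rgb_to_yiq(rgb_prime):
    """Nonlinear R'G'B' triplet in [0, 1] -> (Y, I, Q)."""
    return RGB_TO_YIQ @ np.asarray(rgb_prime, dtype=float)

def yiq_hue_saturation(i, q):
    """Hue angle (radians) and saturation computed from the chrominance pair."""
    return np.arctan2(q, i), np.hypot(i, q)

if __name__ == "__main__":
    y, i, q = rgb_to_yiq([0.8, 0.4, 0.2])
    print(y, i, q)
    print(yiq_hue_saturation(i, q))
```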

As described, the YIQ model was developed from a perceptual point of view and provides several advantages in image coding and communications applications by decoupling the luma (Y) and chrominance components (I and Q). Nevertheless, YIQ is a perceptually non-uniform color space and thus not appropriate for perceptual color difference quantification. For example,


the Euclidean distance is not capable of accurately measuring the perceptual color distance in the perceptually non-uniform YIQ color space. Therefore, YIQ is not the best color space for quantitative computations involving human color perception.

1.8 The HSI Family of Color Models

In image processing systems, it is often convenient to specify colors in a way that is compatible with the hardware used. The different variants of the RGB monitor model address that need. Although these systems are computationally practical, they are not useful for user specification and recognition of colors: the user cannot easily specify a desired color in the RGB model. On the other hand, perceptual features, such as perceived luminance (intensity), saturation and hue, correlate well with the human perception of color. Therefore, a color model in which these color attributes form the basis of the space is preferable from the user's point of view. Models based on lightness, hue and saturation are considered to be better suited for human interaction. The analysis of the user-oriented color spaces starts by introducing the family of intensity, hue and saturation (HSI) models [28], [29]. This family of models is used primarily in computer graphics to specify colors using the artistic notions of tints, shades and tones. However, all the HSI models are derived from the RGB color space by coordinate transformations. In a computer centered image processing system, it is necessary to transform the color coordinates to RGB for display and vice versa for color manipulation within the selected space. The HSI family of color models uses approximately cylindrical coordinates. The saturation (S) is proportional to radial distance, and the hue (H) is a function of the angle in the polar coordinate system. The intensity (I) or lightness (L) is the distance along the axis perpendicular to the polar coordinate plane. The dominant factor in selecting a particular HSI model is the definition of the lightness, which determines the constant-lightness surfaces, and thus the shape of the color solid that represents the model. In the cylindrical models, the set of color pixels in the RGB cube which are assigned a common lightness value (L) form a constant-lightness surface. Any line parallel to the main diagonal of the RGB color cube meets the constant-lightness surface in at most one point. The HSI color space was developed to specify, numerically, the values of hue, saturation, and intensity of a color [4]. The HSI color model is depicted in Figure 1.10. The hue (H) is measured by the angle around the vertical axis and has a range of values between 0 and 360 degrees, beginning with red at 0°. It gives a measure of the spectral composition of a color. The saturation (S) is a ratio that ranges from 0 (i.e. on the I axis), extending radially outwards to a maximum value of 1 on the surface of the cone. This component refers to the proportion of pure light of the dominant wavelength and indicates how


far a color is from a gray of equal brightness. The intensity (I) also ranges between 0 and 1 and is a measure of the relative brightness. At the top and bottom of the cone, where I = 0 and 1 respectively, H and S are undefined and meaningless. At any point along the I axis the saturation component is zero and the hue is undefined. This singularity occurs whenever R = G = B.

Fig. 1.10. The HSI color space

The HSI color model owes its usefulness to two principal facts [1], [28]. First, as in the YIQ model, the intensity component I is decoupled from the chrominance information represented as hue H and saturation S. Second, the hue (H) and saturation (S) components are intimately related to the way in which humans perceive chrominance [1]. Hence, these features make HSI an ideal color model for image processing applications where the chrominance is of importance rather than the overall color perception (which is determined by both luminance and chrominance). One example of the usefulness of the


HSI model is in the design of imaging systems that automatically determine the ripeness of fruits and vegetables [1]. Another application is color image histogram equalization performed in the HSI space to avoid undesirable shifts in image hue [10]. The simplest way to choose constant-lightness surfaces is to define them as planes. A simplified definition of the perceived lightness in terms of the R', G', B' values is $L = \frac{R' + G' + B'}{3}$, where the normalization is used to control the range of lightness values. The different constant-lightness surfaces are perpendicular to the main diagonal of the RGB cube and parallel to each other. The shape of a constant-lightness surface is a triangle for $0 \le L \le \frac{M}{3}$ and for $\frac{2M}{3} \le L \le M$, with $L \in [0, M]$ and where M is a given lightness threshold. The theory underlying the derivation of conversion formulas between the RGB space and the HSI space is described in detail in [1], [28]. The image processing literature on HSI does not clearly indicate whether the linear or the non-linear RGB is used in these conversions [4]. Thus the non-linear (R'G'B'), which is implicit in traditional image processing, shall be used, but this ambiguity must be noted. The conversion from R'G'B' (range [0, 1]) to HSI (range [0, 1]) is highly nonlinear and considerably complicated:

$$H = \cos^{-1}\left\{\frac{\frac{1}{2}\left[(R' - G') + (R' - B')\right]}{\left[(R' - G')^2 + (R' - B')(G' - B')\right]^{\frac{1}{2}}}\right\} \quad (1.41)$$
$$S = 1 - \frac{3}{(R' + G' + B')}\left[\min(R', G', B')\right] \quad (1.42)$$
$$I = \frac{1}{3}(R' + G' + B') \quad (1.43)$$

where H = 360° − H if (B'/I) > (G'/I). Hue is normalized to the range [0, 1] by letting H = H/360°. Hue (H) is not defined when the saturation (S) is zero. Similarly, saturation (S) is undefined if intensity (I) is zero.

To transform the HSI values (range [0, 1]) back to the R'G'B' values (range [0, 1]), the H values in the [0, 1] range must first be converted back to the un-normalized [0°, 360°] range by letting H = 360°(H). For the R'G' (red and green) sector (0° < H ≤ 120°), the conversion is:

$$B' = I(1 - S) \quad (1.44)$$
$$R' = I\left[1 + \frac{S\cos H}{\cos(60° - H)}\right] \quad (1.45)$$
$$G' = 3I - (R' + B') \quad (1.46)$$

The conversion for the G'B' (green and blue) sector (120° < H ≤ 240°) is given by:

$$H = H - 120° \quad (1.47)$$


$$R' = I(1 - S) \quad (1.48)$$
$$G' = I\left[1 + \frac{S\cos H}{\cos(60° - H)}\right] \quad (1.49)$$
$$B' = 3I - (R' + G') \quad (1.50)$$

Finally, for the B'R' (blue and red) sector (240° < H ≤ 360°), the corresponding equations are:

$$H = H - 240° \quad (1.51)$$
$$G' = I(1 - S) \quad (1.52)$$
$$B' = I\left[1 + \frac{S\cos H}{\cos(60° - H)}\right] \quad (1.53)$$
$$R' = 3I - (G' + B') \quad (1.54)$$
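The forward mapping of (1.41)-(1.43) can be coded in a few lines. The sketch below is illustrative only (scalar R'G'B' inputs in [0, 1] are assumed, and the achromatic conventions are a choice, not the book's code):

```python
# Sketch of the R'G'B' -> HSI conversion of equations (1.41)-(1.43),
# with hue normalized to [0, 1].
import math

def rgb_to_hsi(r, g, b):
    i = (r + g + b) / 3.0
    if i == 0.0:
        return 0.0, 0.0, 0.0          # black: saturation and hue undefined, set to 0
    s = 1.0 - min(r, g, b) / i
    if s == 0.0:
        return 0.0, 0.0, i            # achromatic pixel: hue undefined, set to 0
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    ratio = max(-1.0, min(1.0, num / den))   # clamp against floating point drift
    h = math.degrees(math.acos(ratio))
    if b / i > g / i:
        h = 360.0 - h
    return h / 360.0, s, i

if __name__ == "__main__":
    print(rgb_to_hsi(0.9, 0.3, 0.2))
    print(rgb_to_hsi(0.5, 0.5, 0.5))  # gray: hue and saturation are zero by convention
```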

Fast versions of the transformation, containing fewer multiplications and avoiding square roots, are often used in hue calculations. Also, formulas without trigonometric functions can be used. For example, hue can be evaluated using the following formulas [44]:

1. If $B' = \min(R', G', B')$ then
$$H = \frac{G' - B'}{3(R' + G' - 2B')} \quad (1.55)$$
2. If $R' = \min(R', G', B')$ then
$$H = \frac{B' - R'}{3(G' + B' - 2R')} + \frac{1}{3} \quad (1.56)$$
3. If $G' = \min(R', G', B')$ then
$$H = \frac{R' - G'}{3(B' + R' - 2G')} + \frac{2}{3} \quad (1.57)$$

Although the HSI model is useful in some image processing applications, its formulation is flawed with respect to the properties of color vision. The usual formulation makes no clear reference to the linearity or nonlinearity of the underlying RGB or to the lightness perception of human vision [4]. It computes the brightness as (R' + G' + B')/3 and assigns it the name intensity I. Recall that the brightness perception is related to luminance Y; this computation therefore conflicts with the properties of color vision [4]. In addition, there is a discontinuity in the hue at 360°, and thus the formulation introduces visible discontinuities in the color space. Another major disadvantage of the HSI space is that it is not perceptually uniform.


Consequently, the HSI model is not very useful for perceptual image computation and for conveyance of accurate color information. As such, distance measures, such as the Euclidean distance, cannot adequately estimate the perceptual color distance in this space. The model discussed above is not the only member of the family. In particular, the double hexcone HLS model can be defined by simply modifying the constant-lightness surface. It is depicted in Figure 1.11. In the HLS model the lightness is defined as:

$$L = \frac{\max(R', G', B') + \min(R', G', B')}{2} \quad (1.58)$$

If the maximum and the minimum values coincide then S = 0 and the hue is undefined. Otherwise, based on the lightness value, saturation is defined as follows:

1. If $L \le 0.5$ then $S = \dfrac{Max - Min}{Max + Min}$
2. If $L > 0.5$ then $S = \dfrac{Max - Min}{2 - (Max + Min)}$

where $Max = \max(R', G', B')$ and $Min = \min(R', G', B')$ respectively. Similarly, hue is calculated according to:

1. If $R' = Max$ then
$$H = \frac{G' - B'}{Max - Min} \quad (1.59)$$
2. If $G' = Max$ then
$$H = 2 + \frac{B' - R'}{Max - Min} \quad (1.60)$$
3. If $B' = Max$ then
$$H = 4 + \frac{R' - G'}{Max - Min} \quad (1.61)$$
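A minimal sketch of the forward HLS transform of (1.58)-(1.61) is given below (illustrative only; scalar R'G'B' in [0, 1] assumed, hue left in the sector units of one unit per 60° sector, and the usual sector offsets assumed):

```python
# Sketch of the double-hexcone HLS forward transform, equations (1.58)-(1.61).
def rgb_to_hls(r, g, b):
    mx, mn = max(r, g, b), min(r, g, b)
    l = (mx + mn) / 2.0                          # equation (1.58)
    if mx == mn:
        return 0.0, l, 0.0                       # achromatic: hue undefined, S = 0
    s = (mx - mn) / (mx + mn) if l <= 0.5 else (mx - mn) / (2.0 - (mx + mn))
    if mx == r:
        h = (g - b) / (mx - mn)                  # equation (1.59)
    elif mx == g:
        h = 2.0 + (b - r) / (mx - mn)            # equation (1.60)
    else:
        h = 4.0 + (r - g) / (mx - mn)            # equation (1.61)
    return h % 6.0, l, s

if __name__ == "__main__":
    print(rgb_to_hls(0.9, 0.3, 0.2))
    print(rgb_to_hls(0.2, 0.2, 0.8))
```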

The backward transform starts by rescaling the hue angles into the range [0, 6]. Then, the following cases are considered:

1. If S = 0, hue is undefined and (R', G', B') = (L, L, L).
2. Otherwise, let i = Floor(H) (the Floor(X) function returns the largest integer which is not greater than X), where i is the sector number of the hue and f = H − i is the hue value within the sector. The following cases are considered:

- If $L \le L_{critical} = \frac{255}{2}$ then
$$Max = L(1 + S) \quad (1.62)$$
$$Mid1 = L(2fS + 1 - S) \quad (1.63)$$
$$Mid2 = L(2(1 - f)S + 1 - S) \quad (1.64)$$
$$Min = L(1 - S) \quad (1.65)$$

- If $L > L_{critical} = \frac{255}{2}$ then
$$Max = L(1 - S) + 255S \quad (1.66)$$
$$Mid1 = 2((1 - f)L - (0.5 - f)Max) \quad (1.67)$$
$$Mid2 = 2(fL - (f - 0.5)Max) \quad (1.68)$$
$$Min = L(1 + S) - 255S \quad (1.69)$$

Based on these intermediate values the following assignments should be made:

1. if i = 0 then (R', G', B') = (Max, Mid1, Min)
2. if i = 1 then (R', G', B') = (Mid2, Max, Min)
3. if i = 2 then (R', G', B') = (Min, Max, Mid1)
4. if i = 3 then (R', G', B') = (Min, Mid2, Max)
5. if i = 4 then (R', G', B') = (Mid1, Min, Max)
6. if i = 5 then (R', G', B') = (Max, Min, Mid2)

The HSV (hue, saturation, value) color model also belongs to this group of hue-oriented color coordinate systems which correspond more closely to the human perception of color. This user-oriented color space is based on the intuitive appeal of the artist's tint, shade, and tone. The HSV coordinate system, proposed originally by Smith [36], is cylindrical and is conveniently represented by the hexcone model shown in Figure 1.12 [23], [27]. The set of equations below can be used to transform a point in the RGB coordinate system to the appropriate value in the HSV space:

$$H_1 = \cos^{-1}\left\{\frac{\frac{1}{2}\left[(R - G) + (R - B)\right]}{\sqrt{(R - G)^2 + (R - B)(G - B)}}\right\} \quad (1.70)$$
$$H = H_1, \quad \text{if } B \le G \quad (1.71)$$
$$H = 360° - H_1, \quad \text{if } B > G \quad (1.72)$$
$$S = \frac{\max(R, G, B) - \min(R, G, B)}{\max(R, G, B)} \quad (1.73)$$
$$V = \frac{\max(R, G, B)}{255} \quad (1.74)$$

Here the RGB values are between 0 and 255. A fast algorithm to convert the set of RGB values to the HSV color space is provided in [23]. The important advantages of the HSI family of color spaces over other color spaces are:

Fig. 1.11. The HLS color space

Fig. 1.12. The HSV color space

- Good compatibility with human intuition.
- Separability of chromatic values from achromatic values.
- The possibility of using only one color feature, hue, for segmentation purposes. Many image segmentation approaches take advantage of this. Segmentation is usually performed in one color feature (hue) instead of three, allowing the use of much faster algorithms.

However, hue-oriented color spaces have some significant drawbacks, such as:

- singularities in the transform, e.g. undefined hue for achromatic points;
- sensitivity to small deviations of RGB values near singular points;
- numerical instability when operating on hue due to the angular nature of the feature.

1.9 Perceptually Uniform Color Spaces

Visual sensitivity to small differences among colors is of paramount importance in color perception and specification experiments. A color system that is to be used for color specification should be able to represent any color with high precision. All systems currently available for such tasks are based on the CIE XYZ color model. In image processing, a perceptually uniform color space, in which a small perturbation in a component value is approximately equally perceptible across the range of that value, is of particular interest. The color specification systems discussed until now, such as the XYZ or RGB tristimulus values and the various RGB hardware oriented systems, are far from uniform. Recalling the discussion of the YIQ space earlier in this chapter, the ideal way to compute the perceptual components representative of luminance and chrominance is to appropriately form the matrix of linear RGB components and then subject them to nonlinear transfer functions based on the color sensing properties of the human visual system. A similar procedure is used by the CIE to formulate the L*u*v* and L*a*b* spaces. The linear RGB components are first transformed to CIE XYZ components using the appropriate matrix. Finding a transformation of XYZ into a reasonably perceptually uniform color space consumed a decade or more at the CIE and, in the end, no single system could be agreed upon [4], [5]. Finally, in 1976, the CIE standardized two spaces, L*u*v* and L*a*b*, as perceptually uniform. They are slightly different because of the different approaches to their formulation [4], [5], [25], [30]. Nevertheless, both spaces are equally good in perceptual uniformity and provide very good estimates of the color difference (distance) between two color vectors. Both systems are based on the perceived lightness L* and a set of opponent color axes, approximately red-green versus yellow-blue. According to


the CIE 1976 standard, the perceived lightness of a standard observer is assumed to follow the physical luminance (a quantity proportional to intensity) according to a cubic root law. Therefore, the lightness L* is defined by the CIE as:

$$L^* = \begin{cases} 116\left(\dfrac{Y}{Y_n}\right)^{1/3} - 16 & \text{if } \dfrac{Y}{Y_n} > 0.008856 \\[2mm] 903.3\left(\dfrac{Y}{Y_n}\right) & \text{if } \dfrac{Y}{Y_n} \le 0.008856 \end{cases} \quad (1.75)$$

where $Y_n$ is the physical luminance of the white reference point. The range of values for L* is from 0 to 100, representing black and the reference white respectively. A difference of unity between two L* values, the so-called $\Delta L^*$, is the threshold of discrimination. This standard function relates perceived lightness to linear-light luminance. Luminance can be computed as a weighted sum of the red, green and blue components. If three sources appear red, green and blue and have the same power in the visible spectrum, the green will appear the brightest of the three because the luminous efficiency function peaks in the green region of the spectrum. Thus, the coefficients that correspond to contemporary CRT displays (ITU-R BT.709 recommendation) [24] reflect that fact, when using the following equation for the calculation of the luminance:

$$Y_{709} = 0.2125R + 0.7154G + 0.0721B \quad (1.76)$$

The u* and v* components in the L*u*v* space and the a* and b* components in the L*a*b* space are representative of chrominance. In addition, both are device independent color spaces. Both these color spaces are, however, computationally intensive to transform to and from the linear as well as the nonlinear RGB spaces. This is a disadvantage if real-time processing is required or if computational resources are at a premium.

1.9.1 The CIE L*u*v* Color Space

The first uniform color space standardized by the CIE is the L*u*v* space, illustrated in Figure 1.13. It is derived based on the CIE XYZ space and the white reference point [4], [5]. The white reference point $[X_n, Y_n, Z_n]$ is the linear RGB = [1, 1, 1] value converted to XYZ using the following transformation:

$$\begin{bmatrix} X_n \\ Y_n \\ Z_n \end{bmatrix} = \begin{bmatrix} 0.4125 & 0.3576 & 0.1804 \\ 0.2127 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9502 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \quad (1.77)$$

Alternatively, white reference points can be defined based on the Federal Communications Commission (FCC) or the European Broadcasting Union (EBU) RGB values, using the following transformations respectively [35]:

$$\begin{bmatrix} X_n \\ Y_n \\ Z_n \end{bmatrix} = \begin{bmatrix} 0.607 & 0.174 & 0.200 \\ 0.299 & 0.587 & 0.114 \\ 0.000 & 0.066 & 1.116 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \quad (1.78)$$

$$\begin{bmatrix} X_n \\ Y_n \\ Z_n \end{bmatrix} = \begin{bmatrix} 0.430 & 0.342 & 0.178 \\ 0.222 & 0.702 & 0.071 \\ 0.020 & 0.130 & 0.939 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \quad (1.79)$$

Fig. 1.13. The L*u*v* color space

The lightness component L* is defined by the CIE as a modified cube root of the luminance Y [4], [31], [37], [32]:

$$L^* = \begin{cases} 116\left(\dfrac{Y}{Y_n}\right)^{1/3} - 16 & \text{if } \dfrac{Y}{Y_n} > 0.008856 \\[2mm] 903.3\left(\dfrac{Y}{Y_n}\right) & \text{otherwise} \end{cases} \quad (1.80)$$

The CIE definition of L* applies a linear segment near black, for $(Y/Y_n) \le 0.008856$. This linear segment is unimportant for practical purposes [4]. L* has a range of [0, 100], and a $\Delta L^*$ of unity is roughly the threshold of visibility [4].

Computation of u* and v* involves intermediate quantities $u'$, $v'$, $u'_n$, and $v'_n$ defined as:

$$u' = \frac{4X}{X + 15Y + 3Z}, \qquad v' = \frac{9Y}{X + 15Y + 3Z} \quad (1.81)$$
$$u'_n = \frac{4X_n}{X_n + 15Y_n + 3Z_n}, \qquad v'_n = \frac{9Y_n}{X_n + 15Y_n + 3Z_n} \quad (1.82)$$

with the CIE XYZ values computed through (1.20) and (1.21). Finally, u* and v* are computed as:

$$u^* = 13L^*(u' - u'_n) \quad (1.83)$$
$$v^* = 13L^*(v' - v'_n) \quad (1.84)$$
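The following Python sketch (illustrative; not the book's code) maps a CIE XYZ triplet to L*u*v* using (1.80)-(1.84), with the white point obtained from (1.77); note that the black point X = Y = Z = 0 is not handled in this sketch:

```python
# Sketch of the XYZ -> L*u*v* conversion of equations (1.80)-(1.84).
# The default white point is the D65 value produced by equation (1.77).
def xyz_to_luv(x, y, z, white=(0.9505, 1.0001, 1.0887)):
    xn, yn, zn = white
    t = y / yn
    l_star = 116.0 * t ** (1.0 / 3.0) - 16.0 if t > 0.008856 else 903.3 * t
    u_p  = 4.0 * x  / (x + 15.0 * y + 3.0 * z)     # equation (1.81)
    v_p  = 9.0 * y  / (x + 15.0 * y + 3.0 * z)
    u_pn = 4.0 * xn / (xn + 15.0 * yn + 3.0 * zn)  # equation (1.82)
    v_pn = 9.0 * yn / (xn + 15.0 * yn + 3.0 * zn)
    return l_star, 13.0 * l_star * (u_p - u_pn), 13.0 * l_star * (v_p - v_pn)

if __name__ == "__main__":
    print(xyz_to_luv(0.4, 0.35, 0.2))
    print(xyz_to_luv(0.9505, 1.0001, 1.0887))  # the white point maps to (100, 0, 0)
```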

Conversion from L*u*v* to XYZ is accomplished by ignoring the linear segment of L*. In particular, the linear segment can be ignored if the luminance variable Y is represented with eight bits of precision or less.


Then, the luminance Y is given by:

$$Y = \left(\frac{L^* + 16}{116}\right)^3 Y_n \quad (1.85)$$

To compute X and Z, first compute $u'$ and $v'$ as:

$$u' = \frac{u^*}{13L^*} + u'_n, \qquad v' = \frac{v^*}{13L^*} + v'_n \quad (1.86)$$

Finally, X and Z are given by:

$$X = \frac{1}{4}\left[\frac{u'(9.0 - 15.0\,v')Y}{v'} + 15.0\,u'Y\right] \quad (1.87)$$
$$Z = \frac{1}{3}\left[\frac{(9.0 - 15.0\,v')Y}{v'} - X\right] \quad (1.88)$$

Consider two color vectors $\mathbf{x}_{L^*u^*v^*}$ and $\mathbf{y}_{L^*u^*v^*}$ in the L*u*v* space represented as:

$$\mathbf{x}_{L^*u^*v^*} = [x_{L^*}, x_{u^*}, x_{v^*}]^T \quad \text{and} \quad \mathbf{y}_{L^*u^*v^*} = [y_{L^*}, y_{u^*}, y_{v^*}]^T \quad (1.89)$$

The perceptual color distance in the L*u*v* space, called the total color difference $\Delta E_{uv}^*$ in [5], is defined as the Euclidean distance ($L_2$ norm) between the two color vectors $\mathbf{x}_{L^*u^*v^*}$ and $\mathbf{y}_{L^*u^*v^*}$:

$$\Delta E_{uv}^* = \|\mathbf{x}_{L^*u^*v^*} - \mathbf{y}_{L^*u^*v^*}\|_{L_2} = \left[(x_{L^*} - y_{L^*})^2 + (x_{u^*} - y_{u^*})^2 + (x_{v^*} - y_{v^*})^2\right]^{\frac{1}{2}} \quad (1.90)$$

It should be mentioned that in a perceptually uniform space the Euclidean distance is an accurate measure of the perceptual color difference [5]. As such, the color difference formula $\Delta E_{uv}^*$ is widely used for the evaluation of color reproduction quality in image processing systems, such as color coding systems.
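A one-line implementation of the total color difference (1.90) is enough for most uses; the following Python sketch is illustrative only:

```python
# Sketch of the total color difference of equation (1.90): the Euclidean
# distance between two L*u*v* vectors.
import math

def delta_e_uv(x, y):
    """x and y are (L*, u*, v*) triplets."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

if __name__ == "__main__":
    print(delta_e_uv((52.0, 10.0, -4.0), (50.0, 12.0, -1.0)))
```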

1.9.2 The CIE L*a*b* Color Space

The L*a*b* color space is the second uniform color space standardized by the CIE. It is also derived based on the CIE XYZ space and white reference point [5], [37]. The lightness L* component is the same as in the L*u*v* space. The L*, a* and b* components are given by:

$$L^* = 116\left(\frac{Y}{Y_n}\right)^{\frac{1}{3}} - 16 \quad (1.91)$$
$$a^* = 500\left[\left(\frac{X}{X_n}\right)^{\frac{1}{3}} - \left(\frac{Y}{Y_n}\right)^{\frac{1}{3}}\right] \quad (1.92)$$
$$b^* = 200\left[\left(\frac{Y}{Y_n}\right)^{\frac{1}{3}} - \left(\frac{Z}{Z_n}\right)^{\frac{1}{3}}\right] \quad (1.93)$$

with the constraint that $\frac{X}{X_n}, \frac{Y}{Y_n}, \frac{Z}{Z_n} > 0.01$. This constraint will be satisfied

for most practical purposes [4]. Hence, the modified formulae described in [5] for cases that do not satisfy this constraint can be ignored in practice [4], [10]. The back conversion to the XYZ space from the L*a*b* space is done by first computing the luminance Y, as described in the back conversion of L*u*v*, followed by the computation of X and Z:

$$Y = \left(\frac{L^* + 16}{116}\right)^3 Y_n \quad (1.94)$$
$$X = \left(\frac{a^*}{500} + \left(\frac{Y}{Y_n}\right)^{\frac{1}{3}}\right)^3 X_n \quad (1.95)$$
$$Z = \left(\left(\frac{Y}{Y_n}\right)^{\frac{1}{3}} - \frac{b^*}{200}\right)^3 Z_n \quad (1.96)$$
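A minimal Python sketch (illustrative, not the book's code) of the XYZ to L*a*b* mapping of (1.91)-(1.93) and the back conversion of (1.94)-(1.96); the white point is the D65 value of (1.77) and the ratios X/Xn, Y/Yn, Z/Zn are assumed to exceed 0.01:

```python
# Sketch of XYZ <-> L*a*b*, equations (1.91)-(1.96), cube-root branch only.
WHITE = (0.9505, 1.0001, 1.0887)   # D65 white point from equation (1.77)

def xyz_to_lab(x, y, z, white=WHITE):
    xn, yn, zn = white
    fx, fy, fz = (x / xn) ** (1/3), (y / yn) ** (1/3), (z / zn) ** (1/3)
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

def lab_to_xyz(l_star, a_star, b_star, white=WHITE):
    xn, yn, zn = white
    fy = (l_star + 16.0) / 116.0
    fx = a_star / 500.0 + fy
    fz = fy - b_star / 200.0
    return fx ** 3 * xn, fy ** 3 * yn, fz ** 3 * zn

if __name__ == "__main__":
    lab = xyz_to_lab(0.4, 0.35, 0.2)
    print(lab)
    print(lab_to_xyz(*lab))  # approximately recovers (0.4, 0.35, 0.2)
```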

The perceptual color distance in the L*a*b* space is similar to the one in the L*u*v* space. The two color vectors $\mathbf{x}_{L^*a^*b^*}$ and $\mathbf{y}_{L^*a^*b^*}$ in the L*a*b* space can be represented as:

$$\mathbf{x}_{L^*a^*b^*} = [x_{L^*}, x_{a^*}, x_{b^*}]^T \quad \text{and} \quad \mathbf{y}_{L^*a^*b^*} = [y_{L^*}, y_{a^*}, y_{b^*}]^T \quad (1.97)$$

The perceptual color distance (or total color difference) in the L*a*b* space, $\Delta E_{ab}^*$, between the two color vectors $\mathbf{x}_{L^*a^*b^*}$ and $\mathbf{y}_{L^*a^*b^*}$ is given by the Euclidean distance ($L_2$ norm):

$$\Delta E_{ab}^* = \|\mathbf{x}_{L^*a^*b^*} - \mathbf{y}_{L^*a^*b^*}\|_{L_2} = \left[(x_{L^*} - y_{L^*})^2 + (x_{a^*} - y_{a^*})^2 + (x_{b^*} - y_{b^*})^2\right]^{\frac{1}{2}} \quad (1.98)$$

The color difference formula $\Delta E_{ab}^*$ is applicable to the observing conditions normally found in practice, as in the case of $\Delta E_{uv}^*$. However, this simple

difference formula values color differences too strongly when compared to experimental results. To correct the problem, a new difference formula was recommended in 1994 by the CIE [25], [31]. The new formula is as follows:

$$\Delta E_{ab\,94}^* = \left[\left(\frac{x_{L^*} - y_{L^*}}{K_L S_L}\right)^2 + \left(\frac{x_{a^*} - y_{a^*}}{K_c S_c}\right)^2 + \left(\frac{x_{b^*} - y_{b^*}}{K_H S_H}\right)^2\right]^{\frac{1}{2}} \quad (1.99)$$

where $K_L$, $K_c$, $K_H$ are factors to match the perception of the background conditions, and $S_L$, $S_c$, $S_H$ are linear functions of the differences in chroma. Standard reference values for the calculation of $\Delta E_{ab\,94}^*$ have been

specified by the CIE. Namely, the values most often in use are $K_L = K_c = K_H = 1$, $S_L = 1$, $S_c = 1 + 0.045(x_{a^*} - y_{a^*})$ and $S_H = 1 + 0.015(x_{b^*} - y_{b^*})$ respectively. The parametric values may be modified to correspond to typical experimental conditions. As an example, for the textile industry the $K_L$ factor should be 2, and the $K_c$ and $K_H$ factors should be 1. For all other applications a value of 1 is recommended for all parametric factors [38].

1.9.3 Cylindrical L*u*v* and L*a*b* Color Spaces

Any color expressed in the rectangular coordinate system of axes L*u*v* or L*a*b* can also be expressed in terms of cylindrical coordinates, with the perceived lightness L* and the psychometric correlates of chroma and hue [37]. The chroma in the L*u*v* space is denoted as $C_{uv}^*$ and that in the L*a*b* space as $C_{ab}^*$. They are defined as [5]:

$$C_{uv}^* = \left[(u^*)^2 + (v^*)^2\right]^{\frac{1}{2}} \quad (1.100)$$
$$C_{ab}^* = \left[(a^*)^2 + (b^*)^2\right]^{\frac{1}{2}} \quad (1.101)$$

The hue angles are useful quantities for specifying hue numerically [5], [37]. The hue angle $h_{uv}$ in the L*u*v* space and $h_{ab}$ in the L*a*b* space are defined as [5]:

$$h_{uv} = \arctan\left(\frac{v^*}{u^*}\right) \quad (1.102)$$
$$h_{ab} = \arctan\left(\frac{b^*}{a^*}\right) \quad (1.103)$$

The saturation $s_{uv}$ in the L*u*v* space is given by:

$$s_{uv} = \frac{C_{uv}^*}{L^*} \quad (1.104)$$

1.9.4 Applications of the L*u*v* and L*a*b* Spaces

The L*u*v* and L*a*b* spaces are very useful in applications where precise quantification of the perceptual distance between two colors is necessary [5], for example in the realization of perceptually based vector order statistics filters. If a degraded color image has to be filtered so that it closely resembles, in perception, the un-degraded original image, then a good criterion to optimize is the perceptual error between the output image and the un-degraded original image. These spaces are also very useful for the evaluation of perceptual closeness or perceptual error between two color images [4]. Precise evaluation of perceptual closeness between two colors is also essential in color matching systems used in various applications such as multimedia products, image arts, entertainment, and advertisements [6], [14], [22].


L*u*v* and L*a*b* color spaces are extremely useful in imaging systems where exact perceptual reproduction of color images (color consistency) across the entire system is of primary concern, rather than real-time or simple computing. Applications include advertising, graphic arts, digitized or animated paintings, etc. Suppose an imaging system consists of various color devices, for example a video camera or digital scanner, a display device, and a printer. A painting has to be digitized, displayed, and printed, and the displayed and printed versions of the painting must appear as close as possible to the original image. L*u*v* and L*a*b* color spaces are the best to work with in such cases. Both these systems have been successfully applied to image coding for printing [4], [16].

Color calibration is another important process related to color consistency. It basically equalizes an image to be viewed under different illumination or viewing conditions. For instance, an image of a target object can only be taken under a specific lighting condition in a laboratory, but the appearance of this target object under normal viewing conditions, say in ambient light, has to be known. Suppose there is a sample object whose image under ambient light is available. The solution is then to obtain the image of the sample object under the same specific lighting condition in the laboratory. A correction formula can be formulated based on the images of the sample object and used to correct the target object for the ambient light [14]. Perceptually based color spaces, such as L*a*b*, are very useful for computations in such problems [31], [37]. An instance where such calibration techniques have great potential is medical imaging in dentistry.

Perceptually uniform color spaces, with the Euclidean metric to quantify color distances, are particularly useful in color image segmentation of natural scenes using histogram-based or clustering techniques. A method of detecting clusters by fitting to them some circular-cylindrical decision elements in the L*a*b* uniform color coordinate system was proposed in [39], [40]. The method estimates the clusters' color distributions without imposing any constraints on their forms. Boundaries of the decision elements are formed with constant lightness and constant chromaticity loci. Each boundary is obtained using only 1-D histograms of the L*H°C* cylindrical coordinates of the image data. The cylindrical coordinates L*H°C* [30] of the L*a*b* color space, known as lightness, hue, and chroma, are given by:

$$L^* = L^* \quad (1.105)$$
$$H^{\circ} = \arctan(b^*/a^*) \quad (1.106)$$
$$C^* = \left((a^*)^2 + (b^*)^2\right)^{1/2} \quad (1.107)$$

The L*a*b* space is often used in color management systems (CMS). A color management system handles the color calibration and color consistency issues. It is a layer of software resident on a computer that negotiates color reproduction between the application and color devices. Color management systems perform the color transformations necessary to exchange accurate


color between diverse devices [4], [43]. A uniform color space based on CIE L*u*v*, named TekHVC, was proposed by Tektronix as part of its commercially available CMS [45].

1.10 The Munsell Color Space

The Munsell color space represents the earliest attempt to organize color perception into a color space [5], [14], [46]. The Munsell space is defined as a comparative reference for artists. Its general shape is that of a cylindrical representation with three dimensions roughly corresponding to the perceived lightness, hue and saturation. However, contrary to the HSV or HSI color models, where the color solids are parameterized by hue, saturation and perceived lightness, the Munsell space uses the method of the color atlas, where the perception attributes are used for sampling. The fundamental principle behind the Munsell color space is that of equality of visual spacing between each of the three attributes. Hue is scaled according to some uniquely identifiable color. It is represented by a circular band divided into ten sections. The sections are defined as red, yellow-red, yellow, green-yellow, green, blue-green, blue, purple-blue, purple and red-purple. Each section can be further divided into ten subsections if finer divisions of hue are necessary. A chromatic hue is described according to its resemblance to one or two adjacent hues. Value in the Munsell color space refers to a color's lightness or darkness and is divided into eleven sections numbered zero to ten. Value zero represents black while a value of ten represents white. The chroma defines the color's strength. It is measured in numbered steps starting at one, with weak colors having low chroma values. The maximum possible chroma depends on the hue and the value being used. As can be seen in Figure 1.14, the vertical axis of the Munsell color solid is the line of V values ranging from black to white. Hue changes along each of the circles perpendicular to the vertical axis. Finally, chroma starts at zero on the V axis and changes along the radius of each circle. The Munsell space is comprised of a set of 1200 color chips, each assigned a unique hue, value and chroma component. These chips are grouped in such a way that they form a three dimensional solid which resembles a warped sphere [5]. There are different editions of the basic Munsell book of colors, with different finishes (glossy or matte), different sample sizes and a different number of samples. The glossy finish collection displays color point chips arranged on 40 constant-hue charts. On each constant-hue chart the chips are arranged in rows and columns. In this edition the colors progress from light at the top of each chart to very dark at the bottom by steps which are intended to be perceptually equal. They also progress from achromatic colors, such as white and gray at the inside edge of the chart, to chromatic colors at the outside edge of the chart by steps that are also intended to be

40

perceptually equal. All the charts together make up the color atlas, which is the color solid of the Munsell system. Value

Fig. 1.14. The Munsell color system (hue circle R, YR, Y, GY, G, BG, B, PB, P, RP around the vertical Value axis, with Chroma increasing radially)

Although the Munsell book of colors can be used to define or name colors, in practice it is not used directly for image processing applications. Usually the stored image data, most often in RGB format, are converted to the Munsell coordinates using either lookup tables or closed formulas prior to the actual application. The conversion from the RGB components to the Munsell hue (H), value (V) corresponding to luminance, and chroma (C) corresponding to saturation, can be achieved by using the following mathematical algorithm [47]:

x = 0.620R + 0.178G + 0.204B
y = 0.299R + 0.587G + 0.114B   (1.108)
z = 0.056G + 0.942B

A nonlinear transformation is applied to the intermediate values as follows:

p = f(x) - f(y)   (1.109)
q = 0.4 (f(z) - f(y))   (1.110)

where f(r) = 11.6 r^(1/3) - 1.6. Further, the new variables are transformed to:

s = (a + b cos(θ)) p   (1.111)
t = (c + d sin(θ)) q   (1.112)

where θ = tan^(-1)(p/q), a = 8.880, b = 0.966, c = 8.025 and d = 2.558. Finally, the requested values are obtained as:

H = arctan(s/t)   (1.113)
V = f(y)   (1.114)
C = (s^2 + t^2)^(1/2)   (1.115)
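The algorithm above is compact enough to implement directly. The following Python sketch follows Eqs. (1.108)-(1.115) as reconstructed here; since the printed equations are ambiguous in places (notably the argument order of the two arctangents), the coefficients and angle conventions should be checked against [47] before serious use, and the function name rgb_to_munsell_hvc is ours.

import math

def rgb_to_munsell_hvc(R, G, B):
    # Intermediate values (Eq. 1.108); R, G, B assumed in [0, 1].
    x = 0.620 * R + 0.178 * G + 0.204 * B
    y = 0.299 * R + 0.587 * G + 0.114 * B
    z = 0.056 * G + 0.942 * B

    def f(r):
        # Nonlinearity used in Eqs. (1.109)-(1.110).
        return 11.6 * r ** (1.0 / 3.0) - 1.6

    p = f(x) - f(y)                    # Eq. (1.109)
    q = 0.4 * (f(z) - f(y))            # Eq. (1.110)

    a, b, c, d = 8.880, 0.966, 8.025, 2.558
    theta = math.atan2(p, q)           # quadrant-aware tan^-1(p/q)
    s = (a + b * math.cos(theta)) * p  # Eq. (1.111)
    t = (c + d * math.sin(theta)) * q  # Eq. (1.112)

    H = math.atan2(s, t)               # Eq. (1.113)
    V = f(y)                           # Eq. (1.114)
    C = math.hypot(s, t)               # Eq. (1.115)
    return H, V, C

print(rgb_to_munsell_hvc(0.8, 0.4, 0.2))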

Alternatively, conversion from RGB, or from other color spaces, to the Munsell color space can be achieved through look-up tables and published charts [5]. In summary, the Munsell color system is an attempt to define color in terms of hue, chroma and lightness parameters based on subjective observations rather than direct measurements or controlled perceptual experiments. Although it has been found that the Munsell space is not as perceptually uniform as originally claimed, and despite the fact that it cannot be integrated directly with additive color schemes, it is still in use today, notwithstanding attempts to introduce colorimetric models as its replacement.

1.11 The Opponent Color Space

The opponent color space family is a set of physiologically motivated color spaces inspired by the physiology of the human visual system. According to the theory of color vision discussed in [48] the human vision system can be expressed in terms of opponent hues, yellow and blue on one hand and green and red on the other, which cancel each other when superimposed. In [49] an experimental procedure was developed which allowed researchers to quantitatively express the amounts of each of the basic hues present in any spectral stimulus. The color model of [50], [51], [52], [44] suggests the transformation of the RGB `cone' signals to three channels, one achromatic channel (I) and two opponent color channels (RG, YB) according to:

RG = R - G   (1.116)
YB = 2B - R - G   (1.117)
I = R + G + B   (1.118)

At the same time, a set of effective color features was derived through systematic region-segmentation experiments [53]. According to the segmentation procedure of [53], the color features that exhibit deep valleys in their histograms and possess the largest discriminant power for separating the color clusters in a given region need not be the R, G and B components. Since a feature is said to have large discriminant power if its variance is large, color features with large discriminant power were derived by utilizing the Karhunen-Loeve (KL) transformation. At every step of segmenting a region, the new color features are calculated for the pixels in that region by the KL transform of the R, G and B signals. Based on extensive experiments [53], it was concluded

Fig. 1.15. The Opponent color stage of the human visual system (cone signals R, G, B are combined into the opponent signals R+G+B, R-G and 2B-R-G)

that three color features constitute an effective set of features for segmenting color images [54], [55]:

I1 = (R + G + B) / 3   (1.119)
I2 = R - B   (1.120)
I3 = (2G - R - B) / 2   (1.121)
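A per-pixel sketch of the two linear feature sets above, the opponent channels of Eqs. (1.116)-(1.118) and the I1I2I3 features of Eqs. (1.119)-(1.121) as reconstructed here, is given below in Python; the same expressions can be applied element-wise to whole image arrays, and the function names are illustrative only.

def opponent_channels(R, G, B):
    # Opponent representation (Eqs. 1.116-1.118).
    RG = R - G
    YB = 2 * B - R - G
    I = R + G + B
    return RG, YB, I

def ohta_features(R, G, B):
    # Effective features for segmentation (Eqs. 1.119-1.121).
    I1 = (R + G + B) / 3.0
    I2 = R - B
    I3 = (2 * G - R - B) / 2.0
    return I1, I2, I3

print(opponent_channels(200, 120, 40))
print(ohta_features(200, 120, 40))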

In the opponent color space, hue could be coded in a circular format ranging through blue, green, yellow, red and black to white. Saturation is defined as the distance from the hue circle, making hue and saturation specifiable within color categories. Therefore, although the opponent representations are often thought of as linear transforms of the RGB space, the opponent representation is much more suitable for modeling perceived color than RGB is [14].

1.12 New Trends

The plethora of available color models poses application difficulties. Since most of them are designed to perform well in a specific application, their performance deteriorates rapidly under different operating conditions. Therefore, there is a need to merge the different (mainly device-dependent) color spaces into a single standard space. The differences between the monitor RGB space and device-independent spaces, such as the HSV and the CIE L*a*b* spaces, create problems in applications such as multimedia database navigation and face recognition, primarily due to the complexity of the operations needed to support the transforms from/to device-dependent color spaces. To overcome such problems, and to serve the needs of network-centric applications and WWW-based color imaging systems, a new standardized color space based on a colorimetric RGB (sRGB) space has recently been proposed [56]. The aim of the new color space is to complement the current color management strategies by providing a simple, yet efficient and cost-effective, method of handling color in operating systems, device drivers and the Web, using a simple and robust device-independent color definition. Since most computer monitors are similar in their key color characteristics, and the RGB space is the most suitable color space for the devices forming a modern computer-based imaging system, the colorimetric RGB space seems to be the best candidate for such a standardized color space. In defining a colorimetric color space, two factors are of paramount importance:

- the viewing environment parameters, with their dependencies on the Human Visual System
- the standard device space colorimetric definitions and transformations [56]

The viewing environment descriptions contain all the transforms needed to support conversions between standard and target viewing environments, while the colorimetric definitions provide the transforms necessary to convert between the new sRGB and the CIE-XYZ color space. The reference viewing environment parameters can be found in [56], with the sRGB tristimulus values calculated from the CIE-XYZ values according to the following transform:

[R_sRGB]   [ 3.2410  -1.5374  -0.4986 ] [X]
[G_sRGB] = [-0.9692   1.8760   0.0416 ] [Y]   (1.122)
[B_sRGB]   [ 0.0556  -0.2040   1.0570 ] [Z]

In practical image processing systems, negative sRGB tristimulus values, and sRGB values greater than 1, are not retained; they are typically removed by some form of clipping. Subsequently, the linear tristimulus values are transformed to the nonlinear sR'G'B' values as follows:

1. If R_sRGB, G_sRGB, B_sRGB ≤ 0.0034 then

sR' = 12.92 R_sRGB   (1.123)
sG' = 12.92 G_sRGB   (1.124)
sB' = 12.92 B_sRGB   (1.125)

2. else if R_sRGB, G_sRGB, B_sRGB > 0.0034 then

sR' = 1.055 R_sRGB^(1/2.4) - 0.055   (1.126)
sG' = 1.055 G_sRGB^(1/2.4) - 0.055   (1.127)
sB' = 1.055 B_sRGB^(1/2.4) - 0.055   (1.128)


The effect of the above transformation is to closely fit a straightforward gamma value of 2.2 with a slight offset that allows for invertibility in integer mathematics. The nonlinear sR'G'B' values are then converted to digital values with a black digital count of 0 and a white digital count of 255 for 24-bit coding as follows:

sR_d = 255.0 sR'   (1.129)
sG_d = 255.0 sG'   (1.130)
sB_d = 255.0 sB'   (1.131)

The backward transform is defined as follows:

sR' = sR_d / 255.0   (1.132)
sG' = sG_d / 255.0   (1.133)
sB' = sB_d / 255.0   (1.134)

and

1. If sR', sG', sB' ≤ 0.03928 then

R_sRGB = sR' / 12.92   (1.135)
G_sRGB = sG' / 12.92   (1.136)
B_sRGB = sB' / 12.92   (1.137)

2. else if sR', sG', sB' > 0.03928 then

R_sRGB = ((sR' + 0.055) / 1.055)^2.4   (1.138)
G_sRGB = ((sG' + 0.055) / 1.055)^2.4   (1.139)
B_sRGB = ((sB' + 0.055) / 1.055)^2.4   (1.140)

with

[X]   [ 0.4124  0.3576  0.1805 ] [R_sRGB]
[Y] = [ 0.2126  0.7152  0.0722 ] [G_sRGB]   (1.141)
[Z]   [ 0.0193  0.1192  0.9505 ] [B_sRGB]
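The complete sRGB path, from CIE XYZ to linear sRGB, nonlinear encoding, 8-bit digital counts and back, can be summarized in a few lines of Python. This is only a sketch of Eqs. (1.122)-(1.141) as printed here; note that later revisions of the sRGB specification use slightly different threshold constants than the 0.0034 and 0.03928 values quoted in the text.

def xyz_to_srgb_linear(X, Y, Z):
    # Eq. (1.122), with out-of-gamut values clipped to [0, 1].
    R = 3.2410 * X - 1.5374 * Y - 0.4986 * Z
    G = -0.9692 * X + 1.8760 * Y + 0.0416 * Z
    B = 0.0556 * X - 0.2040 * Y + 1.0570 * Z
    return tuple(min(max(v, 0.0), 1.0) for v in (R, G, B))

def srgb_encode(v):
    # Linear value -> nonlinear sR'G'B' (Eqs. 1.123-1.128).
    return 12.92 * v if v <= 0.0034 else 1.055 * v ** (1.0 / 2.4) - 0.055

def srgb_decode(v):
    # Nonlinear sR'G'B' -> linear value (Eqs. 1.135-1.140).
    return v / 12.92 if v <= 0.03928 else ((v + 0.055) / 1.055) ** 2.4

def to_digital(v):
    # Eqs. (1.129)-(1.131): 8-bit digital count.
    return round(255.0 * v)

def from_digital(d):
    # Eqs. (1.132)-(1.134): digital count back to nonlinear value.
    return d / 255.0

# Round trip for the D65 white point (X, Y, Z) = (0.9505, 1.0000, 1.0890):
lin = xyz_to_srgb_linear(0.9505, 1.0000, 1.0890)
print([to_digital(srgb_encode(v)) for v in lin])   # approximately [255, 255, 255]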

The addition of a new standardized color space which supports Web-based imaging systems, device drivers, printers and monitors, complementing the existing color management support, can benefit producers and users alike by presenting a clear path towards an improved color management system.

1.13 Color Images

Color imaging systems are used to capture and reproduce the scenes that humans see. Imaging systems can be built using a variety of optical, electronic or chemical components. However, all of them perform three basic operations, namely: (i) image capture, (ii) signal processing, and (iii) image formation. Color-imaging devices exploit the trichromatic theory of color to regulate how much light from the three primary colors is absorbed or reflected to produce a desired color. There are a number of ways of acquiring and reproducing color images, including but not limited to:

- Photographic film. The film used by conventional cameras contains three emulsion layers, which are sensitive to the red, green and blue light that enters through the camera lens.
- Digital cameras. Digital cameras use a CCD to capture image information. Color information is captured by placing red, green and blue filters before the CCD and storing the response to each channel.
- Cathode-ray tubes. CRTs are the display devices used in televisions and computer monitors. They utilize an extremely fine array of phosphors that emit red, green and blue light at intensities governed by an electron gun, in accordance with the image signal. Due to the close proximity of the phosphors and the spatial filtering characteristics of the human eye, the emitted primary colors are mixed together, producing an overall color.
- Image scanners. The most common method of scanning color images is the utilization of three CCDs, each with a filter to capture red, green and blue light reflectance. The three images are then merged to create a copy of the scanned image.
- Color printers. Color printers are the most common method of obtaining a printed copy of a captured color image. Although the trichromatic theory is still exploited, color in this domain is subtractive; the primaries used are usually cyan, magenta and yellow. The amounts of the three primaries that appear on the printed medium govern how much light is reflected.

1.14 Summary

In this chapter the phenomenon of color was discussed. The basic color sensing properties of the human visual system and the CIE standard color specification system XYZ were described in detail. The existence of three types of spectral absorption cones in the human eye serves as the basis of the trichromatic theory of color, according to which all visible colors can be created by combining three primaries. Thus, any color can be uniquely represented by a three-dimensional vector in a color model defined by the three primary colors.

Table 1.3. Color Model

Color system    Transform (from RGB)    Component correlation
RGB             --                      --
R'G'B'          non linear              highly correlated
XYZ             linear                  correlated
YIQ             linear                  uncorrelated
YCC             linear                  uncorrelated
I1I2I3          linear                  correlated
HSV             non linear              correlated
HSI             non linear              correlated
HLS             non linear              correlated
L*u*v*          non linear              correlated
L*a*b*          non linear              correlated
Munsell         non linear              correlated

Color specification models are of paramount importance in applications where efficient manipulation and communication of images and video frames are required. A number of color specification models are in use today. Examples include color spaces such as RGB, R'G'B', YIQ, HSI, HSV, HLS, L*u*v* and L*a*b*. The color model is a mathematical representation of spectral colors in a finite dimensional vector space. In each one of them the actual color is reconstructed by combining the basis elements of the vector space, the so-called primary colors. By defining different primary colors for the representation of the system, different color models can be devised. One important aspect is the color transformation, the change of coordinates from one color system to another (see Table 1.3). Such a transformation associates each color in one system with a color in the other model. Each color model comes into existence for a specific application in color image processing. Unfortunately, there is no technique for determining the optimum coordinate model for all image processing applications. For a specific application the choice of a color model depends on the properties of the model and the design characteristics of the application. Fig. 1.16 summarizes the most popular color systems and some of their applications.

Fig. 1.16. A taxonomy of color models: colorimetric spaces (XYZ) are used for colorimetric calculations; device-oriented non-uniform spaces (RGB, YIQ, YCC) for storage, processing and analysis, coding, color TV and storage on CD-ROM; device-oriented uniform spaces (L*a*b*, L*u*v*) for color difference evaluation, analysis and color management systems; user-oriented spaces (HSI, HSV, HLS, I1I2I3, Munsell) for applications built around human color perception, multimedia, computer graphics and models of the human visual system.

References

1. Gonzalez, R., Woods, R.E. (1992): Digital Image Processing. Addison-Wesley, Reading, MA.
2. Robertson, P., Schonhut, J. (1999): Color in computer graphics. IEEE Computer Graphics and Applications, 19(4), 18-19.
3. MacDonald, L.W. (1999): Using color effectively in computer graphics. IEEE Computer Graphics and Applications, 19(4), 20-35.
4. Poynton, C.A. (1996): A Technical Introduction to Digital Video. Prentice Hall, Toronto; also available at http://www.inforamp.net/poynton/Poynton{Digital-Video.html.
5. Wyszecki, G., Stiles, W.S. (1982): Color Science, Concepts and Methods, Quantitative Data and Formulas. John Wiley, N.Y., 2nd Edition.
6. Hall, R.A. (1981): Illumination and Color in Computer Generated Imagery. Springer Verlag, New York, N.Y.
7. Hurlbert, A. (1989): The Computation of Color. Ph.D. Dissertation, Massachusetts Institute of Technology.
8. Hurvich, Leo M. (1981): Color Vision. Sinauer Associates, Sunderland, MA.
9. Boynton, R.M. (1990): Human Color Vision. Holt, Rinehart and Winston.
10. Gomes, J., Velho, L. (1997): Image Processing for Computer Graphics. Springer Verlag, New York, N.Y.; also available at http://www.springerny.com/catalog/np/mar97np/DATA/0-387-94854-6.html.
11. Fairchild, M.D. (1998): Color Appearance Models. Addison-Wesley, Reading, MA.
12. Sharma, G., Vrhel, M.J., Trussell, H.J. (1998): Color imaging for multimedia. Proceedings of the IEEE, 86(6): 1088-1108.
13. Sharma, G., Trussell, H.J. (1997): Digital color imaging. IEEE Trans. on Image Processing, 6(7): 901-932.
14. Lammens, J.M.G. (1994): A Computational Model for Color Perception and Color Naming. Ph.D. Dissertation, State University of New York at Buffalo, Buffalo, New York.
15. Johnson, G.M., Fairchild, M.D. (1999): Full spectral color calculations in realistic image synthesis. IEEE Computer Graphics and Applications, 19(4), 47-53.
16. Lu, Guoyun (1996): Communication and Computing for Distributed Multimedia Systems. Artech House Publishers, Boston, MA.
17. Kubinger, W., Vincze, M., Ayromlou, M. (1998): The role of gamma correction in colour image processing. Proceedings of the European Signal Processing Conference, 2: 1041-1044.
18. Luong, Q.T. (1993): Color in computer vision. In Handbook of Pattern Recognition and Computer Vision, World Scientific Publishing Company: 311-368.
19. Young, T. (1802): On the theory of light and colors. Philosophical Transactions of the Royal Society of London, 92: 20-71.
20. Maxwell, J.C. (1890): On the theory of three primary colors. Science Papers 1, Cambridge University Press: 445-450.
21. Padgham, C.A., Saunders, J.E. (1975): The Perception of Light and Color. Academic Press, New York, N.Y.
22. Judd, D.B., Wyszecki, G. (1975): Color in Business, Science and Industry. John Wiley, New York, N.Y.
23. Foley, J.D., van Dam, A., Feiner, S.K., Hughes, J.F. (1990): Fundamentals of Interactive Computer Graphics. Addison-Wesley, Reading, MA.
24. CCIR (1990): CCIR Recommendation 709. Basic parameter values for the HDTV standard for studio and for international program exchange. Geneva, Switzerland.
25. CIE (1995): CIE Publication 116. Industrial color-difference evaluation. Vienna, Austria.
26. Poynton, C.A. (1993): Gamma and its disguises. The nonlinear mappings of intensity in perception, CRTs, film and video. SMPTE Journal: 1099-1108.
27. Kasson, M.J., Plouffe, W. (1992): An analysis of selected computer interchange color spaces. ACM Transactions on Graphics, 11(4): 373-405.
28. Shih, Tian-Yuan (1995): The reversibility of six geometric color spaces. Photogrammetric Engineering and Remote Sensing, 61(10): 1223-1232.
29. Levkowitz, H., Herman, G.T. (1993): GLHS: a generalized lightness, hue and saturation color model. Graphical Models and Image Processing, CVGIP-55(4): 271-285.
30. McLaren, K. (1976): The development of the CIE L*a*b* uniform color space. J. Soc. Dyers Colour, 338-341.
31. Hill, B., Roger, T., Vorhagen, F.W. (1997): Comparative analysis of the quantization of color spaces on the basis of the CIE-Lab color difference formula. ACM Transactions on Graphics, 16(1): 110-154.
32. Hall, R. (1999): Comparing spectral color computation methods. IEEE Computer Graphics and Applications, 19(4), 36-44.
33. Hague, G.E., Weeks, A.R., Myler, H.R. (1995): Histogram equalization of 24 bit color images in the color difference color space. Journal of Electronic Imaging, 4(1), 15-23.
34. Weeks, A.R. (1996): Fundamentals of Electronic Image Processing. SPIE Press, Piscataway, New Jersey.
35. Benson, K.B. (1992): Television Engineering Handbook. McGraw-Hill, London, U.K.
36. Smith, A.R. (1978): Color gamut transform pairs. Computer Graphics (SIGGRAPH'78 Proceedings), 12(3): 12-19.
37. Healey, C.G., Enns, J.T. (1995): A perceptual color segmentation algorithm. Technical Report, Department of Computer Science, University of British Columbia, Vancouver.
38. Luo, M.R. (1998): Color science. In Sangwine, S.J., Horne, R.E.N. (eds.), The Colour Image Processing Handbook, 26-52, Chapman & Hall, Cambridge, Great Britain.
39. Celenk, M. (1988): A recursive clustering technique for color picture segmentation. Proceedings of the Int. Conf. on Computer Vision and Pattern Recognition, 1: 437-444.
40. Celenk, M. (1990): A color clustering technique for image segmentation. Computer Vision, Graphics, and Image Processing, 52: 145-170.
41. Cong, Y. (1998): Intelligent Image Databases. Kluwer Academic Publishers, Boston, MA.
42. Ikeda, M. (1980): Fundamentals of Color Technology. Asakura Publishing, Tokyo, Japan.
43. Rhodes, P.A. (1998): Colour management for the textile industry. In Sangwine, S.J., Horne, R.E.N. (eds.), The Colour Image Processing Handbook, 307-328, Chapman & Hall, Cambridge, Great Britain.
44. Palus, H. (1998): Colour spaces. In Sangwine, S.J., Horne, R.E.N. (eds.), The Colour Image Processing Handbook, 67-89, Chapman & Hall, Cambridge, Great Britain.
45. Tektronix (1990): TekColor Color Management System: System Implementers Manual. Tektronix Inc.
46. Birren, F. (1969): Munsell: A Grammar of Color. Van Nostrand Reinhold, New York, N.Y.
47. Miyahara, M., Yoshida, Y. (1988): Mathematical transforms of (R,G,B) colour data to Munsell (H,V,C) colour data. Visual Communications and Image Processing, 1001, 650-657.
48. Hering, E. (1878): Zur Lehre vom Lichtsinne. C. Gerold's Sohn, Vienna, Austria.
49. Jameson, D., Hurvich, L.M. (1968): Opponent-response functions related to measured cone photopigments. Journal of the Optical Society of America, 58: 429-430.
50. de Valois, R.L., De Valois, K.K. (1975): Neural coding of color. In Carterette, E.C., Friedman, M.P. (eds.), Handbook of Perception, Volume 5, Chapter 5, 117-166, Academic Press, New York, N.Y.
51. de Valois, R.L., De Valois, K.K. (1993): A multistage color model. Vision Research, 33(8): 1053-1065.
52. Holla, K. (1982): Opponent colors as a 2-dimensional feature within a model of the first stages of the human visual system. Proceedings of the 6th Int. Conf. on Pattern Recognition, 1: 161-163.
53. Ohta, Y., Kanade, T., Sakai, T. (1980): Color information for region segmentation. Computer Graphics and Image Processing, 13: 222-241.
54. von Stein, H.D., Reimers, W. (1983): Segmentation of color pictures with the aid of color information and spatial neighborhoods. Signal Processing II: Theories and Applications, 1: 271-273.
55. Tominaga, S. (1986): Color image segmentation using three perceptual attributes. Proceedings of CVPR'86, 1: 628-630.
56. Stokes, M., Anderson, M., Chandrasekar, S., Motta, R. (1997): A standard default color space for the Internet - sRGB. International Color Consortium (ICC), contributed document, electronic reprint (http://www.color.org).





1. Color Spaces

1.1 Basics of Color Vision Color is a sensation created in response to excitation of our visual system by electromagnetic radiation known as light [1], [2], [3]. More speci c, color is the perceptual result of light in the visible region of the electromagnetic spectrum, having wavelengths in the region of 400nm to 700nm, incident upon the retina of the human eye. Physical power or radiance of the incident light is in a spectral power distribution (SPD), often divided into 31 components each representing a 10nm band [4]-[13].

Fig. 1.1. The visible light spectrum

The human retina has three types of color photo-receptor cells, called cones , which respond to radiation with somewhat di erent spectral response curves [4]-[5]. A fourth type of photo-receptor cells, called roads , are also present in the retina. These are e ective only at extremely low light levels, for example during night vision. Although rods are important for vision, they play no role in image reproduction [14], [15]. The branch of color science concerned with the appropriate description and speci cation of a color is called colorimetry [5], [10]. Since there are exactly three types of color photo-receptor cone cells, three numerical components are necessary and sucient to describe a color, providing that appropriate spectral weighting functions are used. Therefore, a color can be speci ed by a tri-component vector. The set of all colors form a vector space called color space or color model. The three components of a color can be de ned in many di erent ways leading to various color spaces [5], [9]. Before proceeding with color speci cation systems (color spaces), it is appropriate to de ne a few terms: Intensity (usually denoted I ), brightness

2

1. Color Spaces

(Br), luminance (Y ), lightness (L ), hue (H ) and saturation (S ), which are often confused or misused in the literature. The intensity (I ) is a measure, over some interval of the electromagnetic spectrum, of the ow of power that is radiated from, or incident on a surface and expressed in units of watts per square meter [4], [18], [16]. The intensity (I ) is often called a linear light measure and thus is expressed in units, such as watts per square meter [4], [5]. The brightness (Br) is de ned as the attribute of a visual sensation according to which an area appears to emit more or less light [5]. Since brightness perception is very complex, the Commission Internationale de L'Eclairage (CIE) de ned another quantity luminance (Y ) which is radiant power weighted by a spectral sensitivity function that is characteristic of human vision [5]. Human vision has a nonlinear perceptual response to luminance which is called lightness (L ). The nonlinearity is roughly logarithmic [4]. Humans interpret a color based on its lightness (L ), hue (H ) and saturation (S ) [5]. Hue is a color attribute associated with the dominant wavelength in a mixture of light waves. Thus hue represents the dominant color as perceived by an observer; when an object is said to be red, orange, or yellow the hue is being speci ed. In other words, it is the attribute of a visual sensation according to which an area appears to be similar to one of the perceived colors: red, yellow, green and blue, or a combination of two of them [4], [5]. Saturation refers to the relative purity or the amount of white light mixed with a hue. The pure spectrum colors are fully saturated and contain no white light. Colors such as pink (red and white) and lavender (violet and white) are less saturated, with the degree of saturation being inversely proportional to the amount of white light added [1]. A color can be de-saturated by adding white light that contains power at all wavelengths [4]. Hue and saturation together describe the chrominance. The perception of color is basically determined by luminance and chrominance [1]. To utilize color as a visual cue in multimedia, image processing, graphics and computer vision applications, an appropriate method for representing the color signal is needed. The di erent color speci cation systems or color models (color spaces or solids) address this need. Color spaces provide a rational method to specify, order, manipulate and e ectively display the object colors taken into consideration. A well chosen representation preserves essential information and provides insight to the visual operation needed. Thus, the selected color model should be well suited to address the problem's statement and solution. The process of selecting the best color representation involves knowing how color signals are generated and what information is needed from these signals. Although color spaces impose constraints on color perception and representation they also help humans perform important tasks. In particular, the color models may be used to de ne colors, discriminate between colors, judge similarity between color and identify color categories for a number of applications [12], [13].


Color model literature can be found in the domain of modern sciences, such as physics, engineering, artificial intelligence, computer science, psychology and philosophy. In the literature four basic color model families can be distinguished [14]:

1. Colorimetric color models, which are based on physical measurements of spectral reflectance. Three primary color filters and a photo-meter, such as the CIE chromaticity diagram, usually serve as the initial points for such models.
2. Psychophysical color models, which are based on the human perception of color. Such models are either based on subjective observation criteria and comparative references (e.g. the Munsell color model) or are built through experimentation to comply with the human perception of color (e.g. the Hue, Saturation and Lightness model).
3. Physiologically inspired color models, which are based on the three primaries, the three types of cones in the human retina. The Red-Green-Blue (RGB) color space used in computer hardware is the best known example of a physiologically inspired color model.
4. Opponent color models, which are based on perception experiments, utilizing mainly pairwise opponent primary colors, such as the Yellow-Blue and Red-Green color pairs.

In image processing applications, color models can alternatively be divided into three categories, namely:

1. Device-oriented color models, which are associated with input, processing and output signal devices. Such spaces are of paramount importance in modern applications, where there is a need to specify color in a way that is compatible with the hardware tools used to provide, manipulate or receive the color signals.
2. User-oriented color models, which are utilized as a bridge between the human operators and the hardware used to manipulate the color information. Such models allow the user to specify color in terms of perceptual attributes, and they can be considered an experimental approximation of the human perception of color.
3. Device-independent color models, which are used to specify color signals independently of the characteristics of a given device or application. Such models are of importance in applications where color comparisons and transmission of visual information over networks connecting different hardware platforms are required.

In 1931, the Commission Internationale de L'Eclairage (CIE) adopted standard color curves for a hypothetical standard observer. These color curves specify how a specific spectral power distribution (SPD) of an external stimulus (visible radiant light incident on the eye) can be transformed into a set of three numbers that specify the color. The CIE color specification system

is based on the description of color as the luminance component Y and two additional components X and Z [5]. The spectral weighting curves of X and Z have been standardized by the CIE based on statistics from experiments involving human observers [5]. The CIE XYZ tristimulus values can be used to describe any color. The corresponding color space is called the CIE XYZ color space. The XYZ model is a device-independent color space that is useful in applications where consistent color representation across devices with different characteristics is important. Thus, it is exceptionally useful for color management purposes. The CIE XYZ space is perceptually highly non-uniform [4]. Therefore, it is not appropriate for quantitative manipulations involving color perception and is seldom used in image processing applications [4], [10].

Traditionally, color images have been specified by the non-linear red (R'), green (G') and blue (B') tristimulus values, and color image storage, processing and analysis is done in this non-linear RGB (R'G'B') color space. The red, green and blue components are called the primary colors. In general, hardware devices such as video cameras, color image scanners and computer monitors process the color information based on these primary colors. Other popular color spaces in image processing are the YIQ (North American TV standard), the HSI (Hue, Saturation and Intensity), and the HSV (Hue, Saturation, Value) color spaces used in computer graphics. Although XYZ is used only indirectly, it has a significant role in image processing since other color spaces can be derived from it through mathematical transforms. For example, the linear RGB color space can be transformed to and from the CIE XYZ color space using a simple linear three-by-three matrix transform. Similarly, other color spaces, such as non-linear RGB, YIQ and HSI, can be transformed to and from the CIE XYZ space, but might require complex and non-linear computations. The CIE has also derived and standardized two other color spaces, called L*u*v* and L*a*b*, from the CIE XYZ color space which are perceptually uniform [5].

The rest of this chapter is devoted to the analysis of the different color spaces in use today. The different color representation models are discussed and analyzed in detail, with emphasis placed on motivation and design characteristics.

1.2 The CIE Chromaticity-based Models

Over the years, the CIE committee has sponsored research into color perception. This has led to a class of widely used mathematical color models. The derivation of these models has been based on a number of color matching experiments, where an observer judges whether two parts of a visual stimulus match in appearance. Since the colorimetry experiments are based on a matching procedure in which the human observer judges the visual similarity of two areas, the theoretical model predicts only matching and not perceived

colors. Through these experiments it was found that light of almost any spectral composition can be matched by mixtures of only three primaries (lights of a single wavelength). The CIE has defined a number of standard observer color matching functions by compiling experiments with different observers, different light sources and various power and spectral compositions. Based on the experiments performed by the CIE early in the twentieth century, it was determined that these three primary colors can be broadly chosen, provided that they are independent. The CIE's experimental matching laws allow for the representation of colors as vectors in a three-dimensional space defined by the three primary colors. In this way, changes between color spaces can be accomplished easily. The next few paragraphs briefly outline how such a task can be accomplished.

According to experiments conducted by Thomas Young in the nineteenth century [19], and later validated by other researchers [20], there are three different types of cones in the human retina, each with a different absorption spectrum: S1(λ), S2(λ), S3(λ), where 380 ≤ λ ≤ 780 (nm). These approximately peak in the yellow-green, green and blue regions of the electromagnetic spectrum, with significant overlap between S1 and S2. For each wavelength, the absorption spectra provide the weight with which light of a given spectral power distribution (SPD) contributes to the cone's output. Based on Young's theory, the color sensation that is produced by a light having SPD C(λ) can be defined as:

$$\alpha_i(C) = \int_{\lambda_1}^{\lambda_2} S_i(\lambda)\, C(\lambda)\, d\lambda \qquad (1.1)$$

for i = 1, 2, 3. According to (1.1), any two colors C1(λ), C2(λ) such that αi(C1) = αi(C2), i = 1, 2, 3, will be perceived to be identical even if C1(λ) and C2(λ) are different. This well known phenomenon of spectrally different stimuli that are indistinguishable to a human observer is called metamerism [14] and constitutes a rather dramatic illustration of the perceptual nature of color and the limitations of the color modeling process.

Assume that three primary colors Ck, k = 1, 2, 3, with SPDs Ck(λ) are available, and let

$$\int C_k(\lambda)\, d\lambda = 1 \qquad (1.2)$$

To match a color C with spectral energy distribution C(λ), the three primaries are mixed in proportions βk, k = 1, 2, 3. Their linear combination $\sum_{k=1}^{3} \beta_k C_k(\lambda)$ should be perceived as C(λ). Substituting this into (1.1) leads to:

$$\alpha_i(C) = \int \left( \sum_{k=1}^{3} \beta_k C_k(\lambda) \right) S_i(\lambda)\, d\lambda = \sum_{k=1}^{3} \beta_k \int S_i(\lambda)\, C_k(\lambda)\, d\lambda \qquad (1.3)$$

for i = 1, 2, 3.

The quantity $\int S_i(\lambda) C_k(\lambda)\, d\lambda$ can be interpreted as the ith (i = 1, 2, 3) cone response generated by one unit of the kth primary color:

$$\alpha_{i,k} = \alpha_i(C_k) = \int S_i(\lambda)\, C_k(\lambda)\, d\lambda \qquad (1.4)$$

Therefore, the color matching equations are:

$$\sum_{k=1}^{3} \beta_k\, \alpha_{i,k} = \alpha_i(C) = \int S_i(\lambda)\, C(\lambda)\, d\lambda \qquad (1.5)$$

assuming a certain set of primary colors Ck(λ) and spectral sensitivity curves Si(λ). For a given arbitrary color, the βk can be found by simply solving (1.4) and (1.5). Following the same approach, wk can be defined as the amount of the kth primary required to match the reference white, provided that a reference white light source with known energy distribution w(λ) is available. In such a case, the values obtained through

$$T_k(C) = \frac{\beta_k}{w_k} \qquad (1.6)$$

for k = 1, 2, 3 are called the tristimulus values of the color C, and determine the relative amounts of primaries required to match that color. The tristimulus values of any given color C(λ) can be obtained from the spectral tristimulus values Tk(λ), which are defined as the tristimulus values of a unit-energy spectral color at wavelength λ. The spectral tristimulus values Tk(λ) provide the so-called spectral matching curves, which are obtained by setting C(λ) = δ(λ − λ') in (1.5). The spectral matching curves for a particular choice of color primaries with an approximately red, green and blue appearance were defined in the CIE 1931 standard [9]. A set of pure monochromatic primaries is used: blue (435.8 nm), green (546.1 nm) and red (700 nm). In Figures 1.2 and 1.3 the Y-axis indicates the relative amount of each primary needed to match a stimulus of the wavelength reported on the X-axis. It can be seen that some of the values are negative. Negative numbers require that the primary in question be added to the opposite side of the original stimulus. Since negative sources are not physically realizable, it can be concluded that an arbitrary set of three primary sources cannot match all the visible colors. However, for any given color a suitable set of three primary colors can be found.

Based on the assumption that the human visual system behaves linearly, the CIE defined spectral matching curves in terms of virtual primaries. This constitutes a linear transformation such that the spectral matching curves are all positive and thus immediately applicable to a range of practical situations. The end results are referred to as the CIE 1931 standard observer matching curves, and the individual curves (functions) are labeled

$\bar{x}$, $\bar{y}$, $\bar{z}$ respectively. In the CIE 1931 standard the matching curves were selected so that $\bar{y}$ is proportional to the human luminosity function, an experimentally determined measure of the perceived brightness of monochromatic light.

Fig. 1.2. The CIE XYZ color matching functions

Fig. 1.3. The CIE RGB color matching functions

If the spectral energy distribution C(λ) of a stimulus is given, then the chromaticity coordinates can be determined in two stages. First, the tristimulus values X, Y, Z are calculated as follows:


$$X = \int \bar{x}(\lambda)\, C(\lambda)\, d\lambda \qquad (1.7)$$

$$Y = \int \bar{y}(\lambda)\, C(\lambda)\, d\lambda \qquad (1.8)$$

$$Z = \int \bar{z}(\lambda)\, C(\lambda)\, d\lambda \qquad (1.9)$$

The new set of primaries must satisfy the following conditions:

1. The XYZ components for all visible colors should be non-negative.
2. Two of the primaries should have zero luminance.
3. As many spectral colors as possible should have at least one zero XYZ component.

Secondly, normalized tristimulus values, called chromaticity coordinates, are calculated based on the primaries as follows:

$$x = \frac{X}{X + Y + Z} \qquad (1.10)$$

$$y = \frac{Y}{X + Y + Z} \qquad (1.11)$$

$$z = \frac{Z}{X + Y + Z} \qquad (1.12)$$

Clearly z = 1 − (x + y) and hence only two coordinates are necessary to

describe a color match. Therefore, the chromaticity coordinates project the 3-D color solid onto a plane, and they are usually plotted as a parametric x-y plot with z implicitly evaluated as z = 1 − (x + y). This diagram is known as the chromaticity diagram and has a number of interesting properties that are used extensively in image processing. In particular:

1. The chromaticity coordinates (x, y) jointly represent the chrominance components of a given color.
2. The entire color space can be represented by the coordinates (x, y, T), in which T = constant is a given chrominance plane.
3. The chromaticity diagram represents every physically realizable color as a point within a well defined boundary. The boundary represents the primary sources. The boundary vertices have coordinates defined by the chromaticities of the primaries.
4. A white point is located in the center of the chromaticity diagram. More saturated colors radiate outwards from white. Complementary pure colors can easily be determined from the diagram.
5. In the chromaticity diagram, the color perception obtained through the superposition of light coming from two different sources lies on a straight line between the points representing the component lights in the diagram.

6. Since the chromaticity diagram reveals the range of all colors which can be produced by means of the three primaries (the gamut), it can be used to guide the selection of primaries subject to design constraints and technical specifications.
7. The chromaticity diagram can be utilized to determine the hue and saturation of a given color, since it represents chrominance by eliminating luminance.

Based on the initial objectives set out by the CIE, two of the primaries, X and Z, have zero luminance, while the primary Y is the luminance indicator determined by the light-efficiency function V(λ) via the spectral matching curve $\bar{y}$. Thus, in the chromaticity diagram the dominant wavelength (hue) of a given color can be defined by the intersection of the line drawn from the reference white through the given color with the boundary of the diagram. Once the hue has been determined, the purity of the color can be found as the ratio r = wc/wp of the line segment that connects the reference white with the color (wc) to the line segment between the reference white and the dominant wavelength/hue (wp).

Fig. 1.4. The chromaticity diagram
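To make the two-stage computation of (1.7)-(1.12) concrete, the following is a minimal numerical sketch, assuming the CIE matching functions and the stimulus SPD are available as numpy arrays sampled on a common, uniformly spaced wavelength grid (all names are illustrative, not standard):

```python
import numpy as np

def spd_to_chromaticity(wavelengths, spd, xbar, ybar, zbar):
    """Tristimulus values (1.7)-(1.9) and chromaticity coordinates (1.10)-(1.12).

    All arrays share one uniformly spaced wavelength grid (nm); the integrals
    are approximated by simple Riemann sums.
    """
    dlam = wavelengths[1] - wavelengths[0]
    X = np.sum(xbar * spd) * dlam
    Y = np.sum(ybar * spd) * dlam
    Z = np.sum(zbar * spd) * dlam
    total = X + Y + Z
    x, y = X / total, Y / total          # z = 1 - (x + y)
    return (X, Y, Z), (x, y)
```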

1.3 The CIE-RGB Color Model

The fundamental assumption behind modern colorimetry theory, as it applies to image processing tasks, is that the initial basis for color vision lies in the different excitation of three classes of photo-receptor cones in the retina. These include the red, green and blue receptors, which define a trichromatic

space whose basis of primaries are pure colors in the short, medium and long portions of the visible spectrum [4], [5], [10]. As a result of the assumed linear nature of light, and due to the principle of superposition, the colors of a mixture are a function of the primaries and the fraction of each primary that is mixed. Throughout this analysis, the primaries need not be known, just their tristimulus values. This principle is called additive reproduction. It is employed in image and video devices used today, where the color spectra from red, green and blue light beams are physically summed at the surface of the projection screen. Direct-view color CRTs (cathode ray tubes) also utilize additive reproduction. In particular, the CRT's screen consists of small dots which produce red, green and blue light. When the screen is viewed from a distance, the spectra of these dots add up in the retina of the observer. In practice, it is possible to reproduce a large number of colors by additive reproduction using the three primaries: red, green and blue. The colors that result from additive reproduction are completely determined by the three primaries.

The video projectors and the color CRTs in use today utilize a color space collectively known under the name RGB, which is based on the red, green and blue primaries and a white reference point. To uniquely specify a color space based on the three primary colors, the chromaticity values of each primary color and a white reference point need to be specified. The gamut of colors which can be mixed from the set of the RGB primaries is given in the (x, y) chromaticity diagram by a triangle whose vertices are the chromaticities of the primaries (the Maxwell triangle) [5], [20]. This is shown in Figure 1.5.

Fig. 1.5. The Maxwell triangle

Fig. 1.6. The RGB color model

In the red, green and blue system, the color solid generated is a bounded subset of the space generated by each primary. Using an appropriate scale along each primary axis, the space can be normalized, so that the maximum value is 1. Therefore, as can be seen in Figure 1.6, the RGB color solid is a cube, called the RGB cube. The origin of the cube, defined as (0, 0, 0), corresponds to black, and the point with coordinates (1, 1, 1) corresponds to the system's brightest white.

In image processing, computer graphics and multimedia systems the RGB representation is the one most often used. A digital color image is represented by a two-dimensional array of three-variate vectors which are comprised of the pixel's red, green and blue values. However, these pixel values are relative to the three primary colors which form the color space. As mentioned earlier, to uniquely define a color space, the chromaticities of the three primary colors and the reference white must be specified. If these are not specified within the chromaticity diagram, the pixel values which are used in the digital representation of the color image are meaningless [16]. In practice, although a number of RGB space variants have been defined and are in use today, their exact specifications are usually not available to the end-user. Multimedia users assume that all digital images are represented in the same RGB space and thus use, compare or manipulate them directly, no matter where these images are from. If a color digital image is represented in the RGB system and no information about its chromaticity characteristics is available, the user cannot accurately reproduce or manipulate the image.

Although in computing and multimedia systems there are no standard primaries or white point chromaticities, a number of color space standards

have been defined and used in the television industry. Among them are the Federal Communications Commission (FCC) 1953 primaries, the Society of Motion Picture and Television Engineers (SMPTE) `C' primaries, the European Broadcasting Union (EBU) primaries and the ITU-R BT.709 standard (formerly known as CCIR Rec. 709) [24]. Most of these standards use a white reference point known as CIE D65, but other reference points, such as the CIE illuminant E, are also used [4]. In additive color mixtures the white point is defined as the one with equal red, green and blue components. However, there is no unique physical or perceptual definition of white, so the characteristics of the white reference point should be defined prior to its utilization in the color space definition. In the CIE illuminant E, or equal-energy illuminant, white is defined as the point whose spectral power distribution is uniform throughout the visible spectrum. A more realistic reference white, which approximates daylight, has been specified numerically by the CIE as illuminant D65. The D65 reference white is the one most often used for color interchange and is the reference point used throughout this work.

The appropriate red, green and blue chromaticities are determined by the technology employed, such as the sensors in the cameras, the phosphors within the CRTs and the illuminants used. The standards are an attempt to quantify the industry's practice. For example, in the FCC-NTSC standard, the set of primaries and the specified white reference point were representative of the phosphors used in color CRTs of a certain era. Although the sensor technology has changed over the years in response to market demands for brighter television receivers, the standards remain the same. To alleviate this problem, the European Broadcasting Union (EBU) has established a new standard (EBU Tech 3213). It is defined in Table 1.1.

Table 1.1. EBU Tech 3213 Primaries

Colorimetry    x        y        z
Red            0.640    0.330    0.030
Green          0.290    0.600    0.110
Blue           0.150    0.060    0.790
White D65      0.3127   0.3290   0.3582

Recently, an international agreement has finally been reached on the primaries for the High Definition Television (HDTV) specification. These primaries are representative of contemporary monitors in computing, computer graphics and studio video production. The standard is known as ITU-R BT.709 and its primaries, along with the D65 reference white, are defined in Table 1.2.


Table 1.2. ITU-R BT.709 Primaries

Colorimetry    x        y        z
Red            0.640    0.330    0.030
Green          0.300    0.600    0.100
Blue           0.150    0.060    0.790
White D65      0.3127   0.3290   0.3582

The different RGB systems can be converted amongst each other using a linear transformation, assuming that the white reference values being used are known. As an example, if it is assumed that D65 is used in both systems, then the conversion between the ITU-R BT.709 and SMPTE `C' primaries is defined by the following matrix transformation:

$$\begin{bmatrix} R_{709} \\ G_{709} \\ B_{709} \end{bmatrix} = \begin{bmatrix} 0.939555 & 0.050173 & 0.010272 \\ 0.017775 & 0.965795 & 0.016430 \\ -0.001622 & -0.004371 & 1.005993 \end{bmatrix} \begin{bmatrix} R_c \\ G_c \\ B_c \end{bmatrix} \qquad (1.13)$$

where R709, G709, B709 are the linear red, green and blue components of ITU-R BT.709 and Rc, Gc, Bc are the linear components in the SMPTE `C' system. The conversion should be carried out in the linear voltage domain, so the pixel values must first be converted into linear voltages. This is achieved by means of the so-called gamma correction.
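As an illustration only, the matrix of (1.13) can be applied to linear-light pixel data along the following lines (numpy is assumed; the function and constant names are ours, not part of any standard):

```python
import numpy as np

# Matrix of (1.13): linear SMPTE `C' RGB to linear ITU-R BT.709 RGB (D65 in both).
SMPTE_C_TO_709 = np.array([
    [ 0.939555,  0.050173, 0.010272],
    [ 0.017775,  0.965795, 0.016430],
    [-0.001622, -0.004371, 1.005993],
])

def smpte_c_to_rec709(rgb_linear):
    """Convert linear SMPTE `C' RGB triplets (shape (..., 3)) to BT.709 primaries.

    Gamma-corrected pixel values must be linearized before this matrix is applied.
    """
    return np.einsum('ij,...j->...i', SMPTE_C_TO_709,
                     np.asarray(rgb_linear, dtype=float))
```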

1.4 Gamma Correction

In image processing, computer graphics, digital video and photography, the symbol γ represents a numerical parameter which describes the nonlinearity of the intensity reproduction. The cathode-ray tube (CRT) employed in modern computing systems is nonlinear in the sense that the intensity of light reproduced at the screen of a CRT monitor is a nonlinear function of the voltage input. A CRT has a power-law response to applied voltage: the light intensity produced on the display is proportional to the applied voltage raised to a power denoted by γ [4], [16], [17]. Thus, the intensity produced by the CRT and the voltage applied to the CRT have the following relationship:

$$I_{int} = (v')^{\gamma} \qquad (1.14)$$

This relationship, which is called the `five-halves' power law, is dictated by the physics of the CRT electron gun. The above function applies to the single electron gun of a gray-scale CRT or to each of the three red, green and blue electron guns of a color CRT. The functions associated with the three guns on a color CRT are very similar to each other but not necessarily identical. The actual value of γ for a particular CRT may range from about 2.3 to 2.6, although most practitioners frequently claim values lower than 2.2 for video monitors. The process of pre-compensating for the nonlinearity by computing a voltage signal from an intensity value is called gamma correction. The function

required is approximately a 0.45 power function. In image processing applications, gamma correction is accomplished by analog circuits at the camera. In computer graphics, gamma correction is usually accomplished by incorporating the function into a frame buffer lookup table. Although in image processing systems gamma originally referred to the nonlinearity of the CRT, it has been generalized to refer to the nonlinearity of an entire image processing system. The γ value of an image or an image processing system can be calculated by multiplying the γ's of its individual components, from the image capture stage to the display.

The model used in (1.14) can cause wide variability in the value of gamma, mainly due to black level errors, since it forces zero voltage to map to zero intensity for any value of gamma. A slightly different model can be used in order to resolve the black level error. The modified model is given as:

$$I_{int} = (\text{voltage} + \epsilon)^{2.5} \qquad (1.15)$$

By fixing the exponent of the power function at 2.5 and using the single parameter ε (the black level offset) to accommodate black level errors, the modified model fits the observed nonlinearity much better than the variable-gamma model in (1.14). The voltage-to-intensity function defined in (1.15) is nearly the inverse of the luminance-to-brightness relationship of human vision. Human vision defines luminance as a weighted mixture of the spectral energy, where the weights are determined by the characteristics of the human retina. The CIE has standardized a weighting function which relates spectral power to luminance. In this standardized function, the lightness perceived by humans relates to the physical luminance (proportional to intensity) by the following equation:

$$L^* = \begin{cases} 116\left(\dfrac{Y}{Y_n}\right)^{1/3} - 16 & \text{if } \dfrac{Y}{Y_n} > 0.008856 \\[10pt] 903.3\left(\dfrac{Y}{Y_n}\right) & \text{if } \dfrac{Y}{Y_n} \le 0.008856 \end{cases} \qquad (1.16)$$

where Yn is the luminance of the reference white, usually normalized either to 1.0 or 100. Thus, the lightness perceived by humans is, approximately, the cube root of the luminance; conversely, the intensity is approximately the lightness raised to the third power. Thus, the entire image processing system can be considered linear or almost linear. To compensate for the nonlinearity of the display (CRT), gamma correction with a power of (1/γ) can be used so that the overall system gamma is approximately 1. In a video system, the gamma correction is applied at the camera to pre-compensate for the nonlinearity of the display. The gamma correction performs the following transfer function:

$$\text{voltage}' = (\text{voltage})^{1/\gamma} \qquad (1.17)$$


where voltage is the voltage generated by the camera sensors. The exponent used for gamma correction is the reciprocal of the display gamma, resulting in an overall transfer function with a unit power exponent. To achieve subjectively pleasing images, the end-to-end power function of the overall imaging system should be around 1.1 or 1.2 instead of the mathematically correct linear system. Rec. 709 specifies a power exponent of 0.45 at the camera which, in conjunction with the 2.5 exponent at the display, results in an overall exponent value of about 1.13. If the value is greater than 1, the image appears sharper, but the scene contrast range which can be reproduced is reduced. On the other hand, reducing the value has a tendency to make the image appear soft and washed out.

For color images, the linear R, G, and B values should be converted into nonlinear voltages R', G' and B' through the application of the gamma correction process. The color CRT will then convert R', G' and B' into linear red, green and blue light to reproduce the original color. The ITU-R BT.709 standard recommends a gamma exponent value of 0.45 for High Definition Television. In practical systems, such as TV cameras, certain modifications are required to ensure proper operation near the dark regions of an image, where the slope of a pure power function is infinite at zero. The red tristimulus (linear light) component may be gamma-corrected at the camera by applying the following convention:

$$R'_{709} = \begin{cases} 4.5R & \text{if } R \le 0.018 \\ 1.099R^{0.45} - 0.099 & \text{if } 0.018 < R \end{cases} \qquad (1.18)$$

with R denoting the linear light value and R'_709 the resulting gamma-corrected value. The computations are identical for the G and B components. The linear R, G, and B values are normally in the range [0, 1] when color images are used in digital form. The software library translates these floating point values to 8-bit integers in the range of 0 to 255 for use by the graphics hardware. Thus, the gamma corrected value should be:

$$R' = 255\, R^{1/\gamma} \qquad (1.19)$$

The constant 255 in (1.19) is introduced during the A/D process. However, gamma correction is usually performed in cameras, and thus pixel values are in most cases nonlinear voltages. Thus, intensity values stored in the framebuffer of the computing device are gamma corrected on-the-fly by hardware lookup tables on their way to the computer monitor display. Modern image processing systems utilize a wide variety of sources of color images, such as images captured by digital cameras, scanned images, digitized video frames and computer generated images. Digitized video frames usually have a gamma correction value between 0.5 and 0.45. Digital scanners assume an output gamma in the range of 1.4 to 2.2 and they perform their gamma correction accordingly. For computer generated images the gamma correction value is

usually unknown. In the absence of the actual gamma value the recommended gamma correction is 0.45. In summary, pixel values alone cannot specify the actual color. The gamma correction value used for capturing or generating the color image is needed. Thus, two images which have been captured with two cameras operating under different gamma correction values will represent colors differently, even if the same primaries and the same white reference point are used.
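A minimal sketch of the camera-side correction of (1.18), applied per component to linear values in [0, 1] (the function name is ours):

```python
import numpy as np

def rec709_gamma_correct(rgb_linear):
    """Rec. 709 camera transfer function of (1.18), applied element-wise.

    Input: linear-light values in [0, 1]; output: gamma-corrected values in [0, 1].
    """
    c = np.asarray(rgb_linear, dtype=float)
    return np.where(c <= 0.018, 4.5 * c, 1.099 * np.power(c, 0.45) - 0.099)
```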

1.5 Linear and Non-linear RGB Color Spaces

The image processing literature rarely discriminates between linear RGB and non-linear (R'G'B'), gamma-corrected, values. For example, in the JPEG and MPEG standards and in image filtering, non-linear RGB (R'G'B') color values are implicit. Unacceptable results are obtained when JPEG or MPEG schemes are applied to linear RGB image data [4]. On the other hand, in computer graphics, linear RGB values are implicitly used [4]. Therefore, it is very important to understand the difference between linear and non-linear RGB values and to be aware of which values are used in an image processing application. Hereafter, the notation R'G'B' will be used for non-linear RGB values so that they can be clearly distinguished from the linear RGB values.

1.5.1 Linear RGB Color Space

As mentioned earlier, intensity is a measure, over some interval of the electromagnetic spectrum, of the flow of power that is radiated from an object. Intensity is often called a linear light measure. The linear R value is proportional to the intensity of the physical power that is radiated from an object around the 700 nm band of the visible spectrum. Similarly, a linear G value corresponds to the 546.1 nm band and a linear B value corresponds to the 435.8 nm band. As a result, the linear RGB space is device independent and is used in some color management systems to achieve color consistency across diverse devices. The linear RGB values in the range [0, 1] can be converted to the corresponding CIE XYZ values in the range [0, 1] using the following matrix transformation [4]:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4125 & 0.3576 & 0.1804 \\ 0.2127 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9502 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (1.20)$$

The transformation from CIE XYZ values in the range [0, 1] to RGB values in the range [0, 1] is defined by:


$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 3.2405 & -1.5372 & -0.4985 \\ -0.9693 & 1.8760 & 0.0416 \\ 0.0556 & -0.2040 & 1.0573 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \qquad (1.21)$$

Alternatively, tristimulus XYZ values can be obtained from the linear RGB values through the following matrix [5]:

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.490 & 0.310 & 0.200 \\ 0.117 & 0.812 & 0.011 \\ 0.000 & 0.010 & 0.990 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (1.22)$$

The linear RGB values are a physical representation of the chromatic light radiated from an object. However, the perceptual response of the human visual system to radiated red, green, and blue intensities is non-linear and more complex. The linear RGB space is, perceptually, highly non-uniform and not suitable for numerical analysis of the perceptual attributes. Thus, the linear RGB values are very rarely used to represent an image. On the contrary, non-linear R'G'B' values are traditionally used in image processing applications such as filtering.
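A short sketch of (1.20) and (1.21) as reusable conversions (numpy assumed; names are illustrative):

```python
import numpy as np

# Matrices of (1.20) and (1.21): linear RGB <-> CIE XYZ, both in the range [0, 1].
RGB_TO_XYZ = np.array([[0.4125, 0.3576, 0.1804],
                       [0.2127, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9502]])
XYZ_TO_RGB = np.array([[ 3.2405, -1.5372, -0.4985],
                       [-0.9693,  1.8760,  0.0416],
                       [ 0.0556, -0.2040,  1.0573]])

def linear_rgb_to_xyz(rgb):
    """Apply (1.20) to an array of linear RGB triplets, shape (..., 3)."""
    return np.einsum('ij,...j->...i', RGB_TO_XYZ, np.asarray(rgb, dtype=float))

def xyz_to_linear_rgb(xyz):
    """Apply (1.21); out-of-gamut results may need clipping back to [0, 1]."""
    return np.einsum('ij,...j->...i', XYZ_TO_RGB, np.asarray(xyz, dtype=float))
```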


1.5.2 Non-linear RGB Color Space

When an image acquisition system, e.g. a video camera, is used to capture the image of an object, the camera is exposed to the linear light radiated from the object. The linear RGB intensities incident on the camera are transformed to non-linear RGB signals using gamma correction. The transformation to non-linear R'G'B' values in the range [0, 1] from linear RGB values in the range [0, 1] is defined by:

$$R' = \begin{cases} 4.5R & \text{if } R \le 0.018 \\ 1.099R^{1/\gamma_C} - 0.099 & \text{otherwise} \end{cases}$$

$$G' = \begin{cases} 4.5G & \text{if } G \le 0.018 \\ 1.099G^{1/\gamma_C} - 0.099 & \text{otherwise} \end{cases} \qquad (1.23)$$

$$B' = \begin{cases} 4.5B & \text{if } B \le 0.018 \\ 1.099B^{1/\gamma_C} - 0.099 & \text{otherwise} \end{cases}$$

where γ_C is known as the gamma factor of the camera or the acquisition device. The value of γ_C commonly used in video cameras is 1/0.45 (≈ 2.22) [4]. The above transformation is graphically depicted in Figure 1.7. The linear segment near low intensities minimizes the effect of sensor noise in practical cameras and scanners. The digital values of the image pixels acquired from the object and stored within a camera or a scanner are thus the R'G'B' values, usually converted to the range of 0 to 255. Three bytes are then required to represent the three components, R', G', and B', of a color image pixel, with one byte for each component.

Fig. 1.7. Linear to Non-linear Light Transformation

It is these non-linear R'G'B' values that are stored as image data files in computers and are used in image processing applications. The RGB symbol used in the image processing literature usually refers to the R'G'B' values and, therefore, care must be taken in color space conversions and other relevant calculations.

Suppose the acquired image of an object needs to be displayed on a display device such as a computer monitor. Ideally, a user would like to see (perceive) the exact reproduction of the object. As pointed out, the image data is in R'G'B' values. Signals (usually voltages) proportional to the R'G'B' values will be applied to the red, green, and blue guns of the CRT (Cathode Ray Tube) respectively. The intensity of the red, green, and blue light generated by the CRT is a non-linear function of the applied signal. The non-linearity of the CRT is a function of the electrostatics of the cathode and the grid of the electron gun. In order to achieve correct reproduction of intensities, an ideal monitor should invert the transformation applied at the acquisition device (camera) so that the intensities generated are identical to the linear RGB intensities that were radiated from the object and incident on the acquisition device. Only then will the perception of the displayed image be identical to the perception of the object. A conventional CRT has a power-law response, as depicted in Figure 1.8. This power-law response, which inverts the non-linear (R'G'B') values in the range [0, 1] back to linear RGB values in the range [0, 1], is defined by the following power function [4]:

$$R = \begin{cases} \dfrac{R'}{4.5} & \text{if } R' \le 0.018 \\[10pt] \left(\dfrac{R' + 0.099}{1.099}\right)^{\gamma_D} & \text{otherwise} \end{cases}$$

$$G = \begin{cases} \dfrac{G'}{4.5} & \text{if } G' \le 0.018 \\[10pt] \left(\dfrac{G' + 0.099}{1.099}\right)^{\gamma_D} & \text{otherwise} \end{cases} \qquad (1.24)$$

$$B = \begin{cases} \dfrac{B'}{4.5} & \text{if } B' \le 0.018 \\[10pt] \left(\dfrac{B' + 0.099}{1.099}\right)^{\gamma_D} & \text{otherwise} \end{cases}$$

The value of the power function, γ_D, is known as the gamma factor of the display device or CRT. Normal display devices have γ_D in the range of 2.2 to 2.45. For exact reproduction of the intensities, the gamma factor of the display device must be equal to the gamma factor of the acquisition device (γ_C = γ_D). Therefore, a CRT with a gamma factor of 2.2 should correctly reproduce the intensities.
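For completeness, a sketch of the display-side inversion of (1.24), again per component (the default gamma value is just a typical CRT figure; the function name is ours):

```python
import numpy as np

def crt_decode(rgb_nonlinear, gamma_d=2.2):
    """Inverse transform of (1.24): non-linear R'G'B' in [0, 1] back to linear RGB."""
    c = np.asarray(rgb_nonlinear, dtype=float)
    return np.where(c <= 0.018, c / 4.5, np.power((c + 0.099) / 1.099, gamma_d))
```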

Fig. 1.8. Non-linear to Linear Light Transformation

The transformations that take place throughout the process from image acquisition to image display and perception are illustrated in Figure 1.9.


Fig. 1.9. Transformation of Intensities from Image Capture to Image Display

It is obvious from the above discussion that the R'G'B' space is a device-dependent space. Suppose a color image, represented in the R'G'B' space, is displayed on two computer monitors having different gamma factors. The red, green, and blue intensities produced by the monitors will not be identical and the displayed images might have different appearances. Device-dependent spaces cannot be used if color consistency across various devices, such as display devices, printers, etc., is of primary concern. However, similar devices

(e.g. two computer monitors) usually have similar gamma factors, and in such cases device dependency might not be an important issue. As mentioned before, the human visual system has a non-linear perceptual response to intensity, which is roughly logarithmic and is, approximately, the inverse of a conventional CRT's non-linearity [4]. In other words, the perceived red, green, and blue intensities are approximately linearly related to the R'G'B' values. Due to this fact, computations involving R'G'B' values have an approximate relation to human color perception, and the R'G'B' space is less perceptually non-uniform than the CIE XYZ and linear RGB spaces [4]. Hence, distance measures defined between the R'G'B' values of two color vectors provide a computationally simple estimate of the error between them. This is very useful for real-time applications and systems in which computational resources are at a premium. However, the R'G'B' space is not adequately uniform and cannot be used for accurate perceptual computations. In such instances, perceptually uniform color spaces (e.g. L*u*v* and L*a*b*) that are derived based on the attributes of human color perception are more desirable than the R'G'B' space [4].

1.6 Color Spaces Linearly Related to the RGB

In transmitting color images through a computer-centric network, all three primaries should be transmitted. Thus, storage or transmission of a color image using RGB components requires a channel capacity three times that of gray-scale images. To reduce these requirements and to boost bandwidth utilization, the properties of the human visual system must be taken into consideration. There is strong evidence that the human visual system forms an achromatic channel and two chromatic color-difference channels in the retina. Consequently, a color image can be represented as a wide-band component corresponding to brightness, and two narrow-band color components with considerably less data rate than that allocated to brightness. Since a large percentage (around 60%) of the brightness is attributed to the green primary, it is advantageous to base the color components on the other two primaries. The simplest way to form the two color components is by subtraction (e.g. subtracting the brightness from the blue and red primaries). In this way the unit RGB color cube is transformed into the luminance Y and two color difference components B − Y and R − Y [33], [34]. Once these color difference components have been formed, they can be sub-sampled to reduce the bandwidth or data capacity without any visible degradation in performance. The color difference components are calculated from the nonlinear, gamma-corrected values R', G', B' rather than the tristimulus (linear voltage) R, G, B primary components.

According to the CIE standards, the color imaging system should operate similarly to a gray-scale system, with a CIE luminance component Y formed

as a weighted sum of RGB tristimulus values. The coefficients in the weighted sum correspond to the sensitivity of the human visual system to each of the RGB primaries. The coefficients are also a function of the chromaticity of the white reference point used. International agreement on the Rec. 709 standard provides a value for the luminance component based on the Rec. 709 primaries [24]. Thus, the Y'_709 luma equation is:

$$Y'_{709} = 0.2125R'_{709} + 0.7154G'_{709} + 0.0721B'_{709} \qquad (1.25)$$

where R'_709, G'_709 and B'_709 are the gamma-corrected (nonlinear) values of the three primaries. The two color difference components B'_709 − Y'_709 and R'_709 − Y'_709 can be formed on the basis of the above equation. Various scale factors are applied to the basic color difference components for different applications. For example, Y'PBPR is used for component analog video, such as BetaCam, and Y'CBCR for component digital video, such as studio video, JPEG and MPEG. Kodak's YCC (PhotoCD model) uses scale factors optimized for the gamut of film colors [31]. All these systems utilize differently scaled versions of the (Y'_709, B'_709 − Y'_709, R'_709 − Y'_709) set, so as to place the extrema of the component signals at more convenient values.

In particular, the Y'PBPR system used in component analog equipment is defined by the following pair:

$$\begin{bmatrix} Y'_{601} \\ P_B \\ P_R \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.168736 & -0.331264 & 0.5 \\ 0.5 & -0.418686 & -0.081312 \end{bmatrix} \begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} \qquad (1.26)$$

and

$$\begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} = \begin{bmatrix} 1 & 0 & 1.402 \\ 1 & -0.344136 & -0.714136 \\ 1 & 1.772 & 0 \end{bmatrix} \begin{bmatrix} Y'_{601} \\ P_B \\ P_R \end{bmatrix} \qquad (1.27)$$

The first row comprises the luminance coefficients, which sum to unity. For each of the other two rows the coefficients sum to zero, a necessity for color difference formulas. The 0.5 weights reflect the maximum excursion of PB and PR for the blue and the red primaries.

Y'CBCR is the Rec. ITU-R BT.601-4 international standard for studio-quality component digital video. The luminance signal is coded in 8 bits. Y' has an excursion of 219 with an offset of 16, with the black point coded at 16 and the white point at code 235. Color differences are also coded in 8-bit form with excursions of 112 and an offset of 128, for a range of 16 through 240 inclusive. To compute Y'CBCR from nonlinear R'G'B' in the range [0, 1] the following set should be used:

$$\begin{bmatrix} Y'_{601} \\ C_B \\ C_R \end{bmatrix} = \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} + \begin{bmatrix} 65.481 & 128.553 & 24.966 \\ -37.797 & -74.203 & 112.0 \\ 112.0 & -93.786 & -18.214 \end{bmatrix} \begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} \qquad (1.28)$$


with the inverse transform

$$\begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} = \begin{bmatrix} 0.00456821 & 0.0 & 0.00625893 \\ 0.00456621 & -0.00153632 & -0.00318811 \\ 0.00456621 & 0.00791071 & 0.0 \end{bmatrix} \left( \begin{bmatrix} Y'_{601} \\ C_B \\ C_R \end{bmatrix} - \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} \right) \qquad (1.29)$$

When 8-bit R'G'B' values are used, black is coded at 0 and white at 255. To encode Y'CBCR from R'G'B' in the range [0, 255] using 8-bit binary arithmetic, the transformation matrix should be scaled by 256/255. The resulting transformation pair is as follows:

$$\begin{bmatrix} Y'_{601} \\ C_B \\ C_R \end{bmatrix} = \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} + \frac{1}{256} \begin{bmatrix} 65.481 & 128.553 & 24.966 \\ -37.797 & -74.203 & 112.0 \\ 112.0 & -93.786 & -18.214 \end{bmatrix} \begin{bmatrix} R'_{255} \\ G'_{255} \\ B'_{255} \end{bmatrix} \qquad (1.30)$$

where R'_255 is the gamma-corrected value, obtained using a gamma-correction lookup table for 1/γ. This yields the RGB intensity values with integer components between 0 and 255 which are gamma-corrected by the hardware. To obtain R'G'B' values in the range [0, 255] from Y'CBCR using 8-bit arithmetic, the following transformation should be used:

$$\begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} = 256 \begin{bmatrix} 0.00456821 & 0.0 & 0.00625893 \\ 0.00456621 & -0.00153632 & -0.00318811 \\ 0.00456621 & 0.00791071 & 0.0 \end{bmatrix} \left( \begin{bmatrix} Y'_{601} \\ C_B \\ C_R \end{bmatrix} - \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} \right) \qquad (1.31)$$

Some of the coefficients, when scaled by 256, become larger than unity, and thus some clipping may be required so that the results fall within the acceptable RGB range.

1.7 The YIQ Color Space

23

C2 = 137 + 135:64(R0 ; Y 0 )

(1.34) The two chrominance components are compressed by factors of 2 both horizontally and vertically. To reproduce predistorted R0 G0 B0 values in the range of [0; 1] from integer PhotoYCC components the following transform is applied: 2 R0 3 2 0:00549804 3 0 : 0 0 : 0051681 4 G0 5 = 1 4 0:00549804 ;0:0015446 ;0:0026325 5 256 0:00549804 0:0079533 B0 0:0

02 Y 0 3 2 0 31 @4 C1 5 ; 4 156 5A

(1.35)

C2 137 The B 0 ; Y 0 and R0 ; Y 0 components can be converted into polar coordinates to represent the perceptual attributes of hue and saturation. The values can be computed using the following formulas [34]: 0;Y0 H = tan;1 ( B 0 R ;Y0)

S = ((B 0 ; Y 0 )2 + (R0 ; Y 0 )2 ) 21

(1.36) (1.37)

where the saturation S is the length of the vector from the origin of the chromatic plane to the speci c color and the hue H is the angle between the R0 ; Y 0 axis and the saturation component [33].

1.7 The YIQ Color Space The YIQ color speci cation system, used in commercial color TV broadcasting and video systems, is based upon the color television standard that was adopted in the 1950s by the National Television Standard committee (NTSC) [10], [1], [27], [28]. Basically, YIQ is a recoding of non-linear R0 G0 B0 for transmission eciency and for maintaining compatibility with monochrome TV standards [1], [4]. In fact, the Y component of the YIQ system provides all the video information required by a monochrome television system. The YIQ model was designed to take advantage of the human visual system's greater sensitivity to change in luminance than to changes in hue or saturation [1]. Due to these characteristics of the human visual system, it is useful in a video system to specify a color with a component representative of luminance Y and two other components: the in-phase I , an orange-cyan axis, and the quadrature Q component, the magenta-green axis. The two chrominance components are used to jointly represent hue and saturation .

24

1. Color Spaces

With this model, it is possible to convey the component representative of luminance Y in such a way that noise (or quantization) introduced in transmission, processing and storage is minimal and has a perceptually similar e ect across the entire tone scale from black to white [4]. This is done by allowing more bandwidth (bits) to code the luminance (Y ) and less bandwidth (bits) to code the chrominance (I and Q) for ecient transmission and storage purposes without introducing large perceptual errors due to quantization [1]. Another implication is that the luminance (Y ) component of an image can be processed without a ecting its chrominance (color content). For instance, histogram equalization to a color image represented in YIQ format can be done simply by applying histogram equalization to its Y component [1]. The relative colors in the image are not a ected by this process. The ideal way to accomplish these goals would be to form a luminance component (Y ) by applying a matrix transform to the linear RGB components and then subjecting the luminance (Y ) to a non-linear transfer function to achieve a component similar to lightness L. However, there are practical reasons in a video system why these operations are performed in the opposite order [4]. First, gamma correction is applied to each of the linear RGB. Then, a weighted sum of the nonlinear components is computed to form a component representative of luminance Y . The resulting component (luma) is related to luminance but is not the same as the CIE luminance Y although the same symbol is used for both of them. The nonlinear RGB to YIQ conversion is de ned by the following matrix transformation [4], [1]: 2 Y 3 2 0:299 0:587 0:114 3 2 R0 3 4 I 5 = 4 0:596 ;0:275 ;0:321 5 4 G0 5 (1.38) Q 0:212 ;0:523 0:311 B 0 As can be seen from the above transformation, the blue component has a small contribution to the brightness sensation (luma Y ) despite the fact that human vision has extraordinarily good color discrimination capability in the blue color [4]. The inverse matrix transformation is performed to convert YIQ to nonlinear R0 G0 B0 . Introducing a cylindrical coordinate transformation, numerical values for hue and saturation can be calculated as follows: HY IQ = tan;1 ( QI ) (1.39)

SY IQ = (I 2 + Q2 )

1 2

(1.40) As described it, the YIQ model is developed from a perceptual point of view and provides several advantages in image coding and communications applications by decoupling the luma (Y ) and chrominance components (I and Q). Nevertheless, YIQ is a perceptually non-uniform color space and thus not appropriate for perceptual color di erence quanti cation. For example,

1.8 The HSI Family of Color Models

25

the Euclidean distance is not capable of accurately measuring the perceptual color distance in the perceptually non-uniform YIQ color space. Therefore, YIQ is not the best color space for quantitative computations involving human color perception.

1.8 The HSI Family of Color Models In image processing systems, it is often convenient to specify colors in a way that is compatible with the hardware used. The di erent variants of the RGB monitor model address that need. Although these systems are computationally practical, they are not useful for user speci cation and recognition of colors. The user cannot easily specify a desired color in the RGB model. On the other hand, perceptual features, such as perceived luminance (intensity), saturation and hue correlate well with the human perception of color . Therefore, a color model in which these color attributes form the basis of the space is preferable from the users point of view. Models based on lightness, hue and saturation are considered to be better suited for human interaction. The analysis of the user-oriented color spaces starts by introducing the family of intensity, hue and saturation (HSI) models [28], [29]. This family of models is used primarily in computer graphics to specify colors using the artistic notion of tints, shades and tones. However, all the HSI models are derived from the RGB color space by coordinate transformations. In a computer centered image processing system, it is necessary to transform the color coordinates to RGB for display and vice versa for color manipulation within the selected space. The HSI family of color models use approximately cylindrical coordinates. The saturation (S ) is proportional to radial distance, and the hue (H ) is a function of the angle in the polar coordinate system. The intensity (I ) or lightness (L) is the distance along the axis perpendicular to the polar coordinate plane. The dominant factor in selecting a particular HSI model is the de nition of the lightness, which determines the constant-lightness surfaces, and thus, the shape of the color solid that represents the model. In the cylindrical models, the set of color pixels in the RGB cube which are assigned a common lightness value (L) form a constant-lightness surface. Any line parallel to the main diagonal of the color RGB cube meets the constantlightness surface at most in one point. The HSI color space was developed to specify, numerically, the values of hue, saturation, and intensity of a color [4]. The HSI color model is depicted in Figure 1.10. The hue (H ) is measured by the angle around the vertical axis and has a range of values between 0 and 360 degrees beginning with red at 0 . It gives a measure of the spectral composition of a color. The saturation (S ) is a ratio that ranges from 0 (i.e. on the I axis), extending radially outwards to a maximum value of 1 on the surface of the cone. This component refers to the proportion of pure light of the dominant wavelength and indicates how

26

1. Color Spaces

far a color is from a gray of equal brightness. The intensity (I ) also ranges between 0 and 1 and is a measure of the relative brightness. At the top and bottom of the cone, where I = 0 and 1 respectively, H and S are unde ned and meaningless. At any point along the I axis the Saturation component is zero and the hue is unde ned. This singularity occurs whenever R = G = B . I=1

White

gray-scale

purely saturated Red

Yellow

P

H Magenta

S

Intensity

Green

Cyan

Blue

Black

I=0

Fig. 1.10. The HSI Color Space

The HSI color model owes its usefulness to two principal facts [1], [28]. First, like in the YIQ model, the intensity component I is decoupled from the chrominance information represented as hue H and saturation S . Second, the hue (H ) and saturation (S ) components are intimately related to the way in which humans perceive chrominance [1]. Hence, these features make the HSI an ideal color model for image processing applications where the chrominance is of importance rather than the overall color perception (which is determined by both luminance and chrominance). One example of the usefulness of the HSI model is in the design of imaging systems that automatically determine the ripeness of fruits and vegetables [1]. Another application is color image histogram equalization performed in the HSI space to avoid undesirable shifts in image hue [10]. The simplest way to choose constant-lightness surfaces is to de ne them as planes. A simpli ed 0de nition 0 0 of the perceived lightness in terms of the R,G,B values is L = R +G3 +B , where the normalization is used to control the range of lightness values. The di erent constant-lightness surfaces are perpendicular to the main diagonal of the RGB cube and parallel to each

1.8 The HSI Family of Color Models

27

other. The shape of a constant lightness surface is a triangle for 0L M3 and 2M 3 LM with L2[0; M ] and where M is a given lightness threshold. The theory underlying the derivation of conversion formulas between the RGB space and HSI space is described in detail in [1], [28]. The image processing literature on HSI does not clearly indicate whether the linear or the non-linear RGB is used in these conversions [4]. Thus the non-linear (R0 G0 B0 ), which is implicit in traditional image processing, shall be used. But this ambiguity must be noted. The conversion from R0 G0 B0 (range [0, 1]) to HSI (range [0, 1]) is highly nonlinear and considerably complicated: # " 1 [(R0 ; G0 ) + (R0 ; B 0 )] ; 1 2 (1.41) H = cos [(R0 ; G0 )2 + (R0 ; B 0 )(G0 ; B 0 )] S = 1 ; (R0 + G30 + B 0 ) [min(R0 ; G0 ; B 0 )] (1.42) I = 13 (R0 + G0 + B 0 ) (1.43) where H = 360o ; H , if (B 0 =I ) > (G0 =I ). Hue is normalized to the range [0, 1] by letting H = H=360o. Hue (H ) is not de ned when the saturation (S ) is zero. Similarly, saturation (S ) is unde ned if intensity (I ) is zero. To transform the HSI values (range [0, 1]) back to the R0 G0 B0 values (range [0, 1]), then the H values in [0, 1] range must rst be converted back to the un-normalized [0o , 360o] range by letting H = 360o(H ). For the R0 G0 (red and green) sector (0o < H  120o), the conversion is: B 0 = I (1 ; S ) (1.44)   H (1.45) R0 = I 1 + cos S(60cos o ; H) G0 = 3I ; (R0 + B 0 ) (1.46) The conversion for the G0 B 0 (green and blue) sector (120o < H  240o) is given by: H = H ; 120o (1.47) R0 = I (1 ; S ) (1.48)   H G0 = I 1 + cos S(60cos (1.49) o ; H) B 0 = 3I ; (R0 + G0 ) (1.50) Finally, for the B 0 R0 (blue and red) sector (240o < H  360o), the corresponding equations are: 1 2

28

1. Color Spaces

H = H ; 240o G0 = I (1 ; S )



H B 0 = I 1 + cos S(60cos o ; H) R0 = 3I ; (G0 + B 0 )



(1.51) (1.52) (1.53)

(1.54) Fast versions of the transformation, containing fewer multiplications and avoiding square roots, are often used in hue calculations. Also, formulas without trigonometric functions can be used. For example, hue can be evaluated using the following formula [44]: 1. If B 0 = min(R0 ; G0 ; B 0 ) then 0

0

B H = 3(R0G+ ; G0 ; 2B 0

(1.55)

2. If R0 = min(R0 ; G0 ; B 0 ) then 0 0 H = R0 +B G;0 ;R 2B 0 + 31 (1.56) 3. If G0 = min(R0 ; G0 ; B 0 ) then 0 0 (1.57) H = R0 +B G;0 ;R 2B 0 + 32 Although the HSI model is useful in some image processing applications, the formulation of it is awed with respect to the properties of color vision. The usual formulation makes no clear reference to the linearity or nonlinearity of the underlying RGB and to the lightness perception of human vision [4]. It computes the brightness as (R0 + G0 + B 0 ) =3 and assigns the name intensity I . Recall that the brightness perception is related to luminance Y . Thus, this computation con icts with the properties of color vision [4]. In addition to this, there is a discontinuity in the hue at 360o and thus, the formulation introduces visible discontinuities in the color space. Another major disadvantage of the HSI space is that it is not perceptually uniform. Consequently, the HSI model is not very useful for perceptual image computation and for conveyance of accurate color information. As such, distance measures, such as the Euclidean distance, cannot estimate adequately the perceptual color distance in this space. The model discussed above is not the only member of the family. In particular, the double hexcone HLS model can be de ned by simply modifying the constant-lightness surface. It is depicted in Figure 1.11. In the HLS model the lightness is de ned as:

L = \frac{\max(R', G', B') + \min(R', G', B')}{2}    (1.58)

If the maximum and the minimum value coincide then S = 0 and the hue is undefined. Otherwise, based on the lightness value, saturation is defined as follows:

1. If L \le 0.5 then S = \frac{Max - Min}{Max + Min}
2. If L > 0.5 then S = \frac{Max - Min}{2 - (Max + Min)}

where Max = \max(R', G', B') and Min = \min(R', G', B') respectively. Similarly, hue is calculated according to:

1. If R' = Max then
H = \frac{G' - B'}{Max - Min}    (1.59)
2. If G' = Max then
H = 2 + \frac{B' - R'}{Max - Min}    (1.60)
3. If B' = Max then
H = 4 + \frac{R' - G'}{Max - Min}    (1.61)

The backward transform starts by rescaling the hue angles into the range [0, 6]. Then, the following cases are considered:

1. If S = 0, hue is undefined and (R', G', B') = (L, L, L).
2. Otherwise, i = Floor(H) (the Floor(X) function returns the largest integer which is not greater than X), in which i is the sector number of the hue and f = H - i is the hue value in each sector. The following cases are considered:
- if L \le L_{critical} = \frac{255}{2} then
Max = L(1 + S)    (1.62)
Mid1 = L(2fS + 1 - S)    (1.63)
Mid2 = L(2(1 - f)S + 1 - S)    (1.64)
Min = L(1 - S)    (1.65)
- if L > L_{critical} = \frac{255}{2} then
Max = L(1 - S) + 255S    (1.66)
Mid1 = 2((1 - f)S - (0.5 - f)Max)    (1.67)
Mid2 = 2(fL - (f - 0.5)Max)    (1.68)
Min = L(1 + S) - 255S    (1.69)


Based on these intermediate values the following assignments should be made:

1. if i = 0 then (R', G', B') = (Max, Mid1, Min)
2. if i = 1 then (R', G', B') = (Mid2, Max, Min)
3. if i = 2 then (R', G', B') = (Min, Max, Mid1)
4. if i = 3 then (R', G', B') = (Min, Mid2, Max)
5. if i = 4 then (R', G', B') = (Mid1, Min, Max)
6. if i = 5 then (R', G', B') = (Max, Min, Mid2)

Fig. 1.11. The HLS color space (double hexcone spanned by hue H, lightness L and saturation S, with white at L = 0 and black at L = 1)

The HSV (hue, saturation, value) color model also belongs to this group of hue-oriented color coordinate systems which correspond more closely to the human perception of color. This user-oriented color space is based on the intuitive appeal of the artist's tint, shade, and tone. The HSV coordinate system, proposed originally in Smith [36], is cylindrical and is conveniently represented by the hexcone model shown in Figure 1.12 [23], [27].

Fig. 1.12. The HSV color space (hexcone spanned by hue H, saturation S and value V, with white at V = 0 and black at V = 1)

The set of equations below can be used to transform a point in the RGB coordinate system to the appropriate value in the HSV space:

H_1 = \cos^{-1}\left\{ \frac{\frac{1}{2}\left[(R-G) + (R-B)\right]}{\left[(R-G)^2 + (R-B)(G-B)\right]^{1/2}} \right\}    (1.70)
H = H_1, \quad \text{if } B \le G    (1.71)
H = 360^\circ - H_1, \quad \text{if } B > G    (1.72)
S = \frac{\max(R, G, B) - \min(R, G, B)}{\max(R, G, B)}    (1.73)
V = \frac{\max(R, G, B)}{255}    (1.74)
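The RGB-to-HSV mapping of (1.70)-(1.74) can be sketched in Python as follows; the function name and the epsilon guard for achromatic inputs are assumptions, and the inputs are taken to lie in the [0, 255] range used in the text.

```python
import math

def rgb_to_hsv(r, g, b, eps=1e-12):
    """Convert RGB values in [0, 255] to HSV following (1.70)-(1.74)."""
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps   # eps: assumption for grey inputs
    h1 = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    h = h1 if b <= g else 360.0 - h1                 # (1.71)-(1.72)
    mx, mn = max(r, g, b), min(r, g, b)
    s = 0.0 if mx == 0 else (mx - mn) / mx           # (1.73)
    v = mx / 255.0                                   # (1.74)
    return h, s, v

print(rgb_to_hsv(200, 120, 40))
```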

Here the RGB values are between 0 and 255. A fast algorithm used to convert the set of RGB values to the HSV color space is provided in [23]. The important advantages of the HSI family of color spaces over other color spaces are:

- Good compatibility with human intuition.
- Separability of chromatic values from achromatic values.
- The possibility of using only one color feature, hue, for segmentation purposes.

Many image segmentation approaches take advantage of this. Segmentation is usually performed in one color feature (hue) instead of three, allowing the use of much faster algorithms. However, hue-oriented color spaces have some significant drawbacks, such as:

- singularities in the transform, e.g. undefined hue for achromatic points;
- sensitivity to small deviations of RGB values near singular points;
- numerical instability when operating on hue due to the angular nature of the feature.

1.9 Perceptually Uniform Color Spaces

Visual sensitivity to small differences among colors is of paramount importance in color perception and specification experiments. A color system that is to be used for color specification should be able to represent any color with high precision. All systems currently available for such tasks are based on the CIE XYZ color model. In image processing, a perceptually uniform color space, in which a small perturbation in a component value is approximately equally perceptible across the range of that value, is of particular interest. The color specification systems discussed until now, such as the XYZ or RGB tristimulus values and the various RGB hardware-oriented systems, are far from uniform. Recalling the discussion of the YIQ space earlier in this chapter, the ideal way to compute the perceptual components representative of luminance and chrominance is to appropriately form a matrix of linear RGB components and then subject them to nonlinear transfer functions based on the color sensing properties of the human visual system. A similar procedure is used by the CIE to formulate the L*u*v* and L*a*b* spaces. The linear RGB components are first transformed to CIE XYZ components using the appropriate matrix. Finding a transformation of XYZ into a reasonably perceptually uniform color space consumed a decade or more at the CIE and, in the end, no single system could be agreed upon [4], [5]. Finally, in 1976, the CIE standardized two spaces, L*u*v* and L*a*b*, as perceptually uniform. They are slightly different because of the different approaches to their formulation [4], [5], [25], [30]. Nevertheless, both spaces are equally good in perceptual uniformity and provide very good estimates of the color difference (distance) between two color vectors. Both systems are based on the perceived lightness L* and a set of opponent color axes, approximately red-green versus yellow-blue. According to


the CIE 1976 standard, the perceived lightness of a standard observer is assumed to follow the physical luminance (a quantity proportional to intensity) according to a cube root law. Therefore, the lightness L* is defined by the CIE as:

L* = \begin{cases} 116\left(\frac{Y}{Y_n}\right)^{1/3} - 16 & \text{if } \frac{Y}{Y_n} > 0.008856 \\ 903.3\left(\frac{Y}{Y_n}\right) & \text{if } \frac{Y}{Y_n} \le 0.008856 \end{cases}    (1.75)

where Y_n is the physical luminance of the white reference point. The range of values for L* is from 0 to 100, representing a black and a reference white respectively. A difference of unity between two L* values, the so-called \Delta L*, is the threshold of discrimination. This standard function relates perceived lightness to linear-light luminance. Luminance can be computed as a weighted sum of the red, green and blue components. If three sources appear red, green and blue and have the same power in the visible spectrum, the green will appear the brightest of the three because the luminous efficiency function peaks in the green region of the spectrum. The coefficients that correspond to contemporary CRT displays (ITU-R BT.709 recommendation) [24] reflect that fact when the following equation is used for the calculation of luminance:

Y_{709} = 0.2125R + 0.7154G + 0.0721B    (1.76)

The u* and v* components in the L*u*v* space and the a* and b* components in the L*a*b* space are representative of chrominance. In addition, both are device independent color spaces. Both of these color spaces are, however, computationally intensive to transform to and from the linear as well as the nonlinear RGB spaces. This is a disadvantage if real-time processing is required or if computational resources are at a premium.
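As a small worked illustration of (1.75) and (1.76), the sketch below computes the BT.709 luminance of a linear RGB triplet and the corresponding CIE lightness; the function names are illustrative and the white-point luminance is assumed to be normalized to 1.

```python
def rec709_luminance(r, g, b):
    """Luminance of linear RGB per the ITU-R BT.709 weights of (1.76)."""
    return 0.2125 * r + 0.7154 * g + 0.0721 * b

def cie_lightness(y, yn=1.0):
    """CIE 1976 lightness L* of (1.75); yn is the white-point luminance (assumed 1.0 here)."""
    ratio = y / yn
    if ratio > 0.008856:
        return 116.0 * ratio ** (1.0 / 3.0) - 16.0
    return 903.3 * ratio

# A mid-grey of 18% reflectance maps close to L* = 50.
print(cie_lightness(rec709_luminance(0.18, 0.18, 0.18)))
```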

1.9.1 The CIE L*u*v* Color Space

The first uniform color space standardized by the CIE is the L*u*v* space illustrated in Figure 1.13. It is derived based on the CIE XYZ space and the white reference point [4], [5]. The white reference point [X_n, Y_n, Z_n] is the linear RGB = [1, 1, 1] value converted to XYZ using the following transformation:

\begin{bmatrix} X_n \\ Y_n \\ Z_n \end{bmatrix} = \begin{bmatrix} 0.4125 & 0.3576 & 0.1804 \\ 0.2127 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9502 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}    (1.77)

Alternatively, white reference points can be defined based on the Federal Communications Commission (FCC) or the European Broadcasting Union (EBU) RGB values using the following transformations respectively [35]:

\begin{bmatrix} X_n \\ Y_n \\ Z_n \end{bmatrix} = \begin{bmatrix} 0.607 & 0.174 & 0.200 \\ 0.299 & 0.587 & 0.114 \\ 0.000 & 0.066 & 1.116 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}    (1.78)

\begin{bmatrix} X_n \\ Y_n \\ Z_n \end{bmatrix} = \begin{bmatrix} 0.430 & 0.342 & 0.178 \\ 0.222 & 0.702 & 0.071 \\ 0.020 & 0.130 & 0.939 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}    (1.79)

Fig. 1.13. The L*u*v* color space

The lightness component L* is defined by the CIE as a modified cube root of luminance Y [4], [31], [37], [32]:

L* = \begin{cases} 116\left(\frac{Y}{Y_n}\right)^{1/3} - 16 & \text{if } \frac{Y}{Y_n} > 0.008856 \\ 903.3\left(\frac{Y}{Y_n}\right) & \text{otherwise} \end{cases}    (1.80)

The CIE definition of L* applies a linear segment near black, for (Y/Y_n) \le 0.008856. This linear segment is unimportant for practical purposes [4]. L* has a range of [0, 100], and a \Delta L* of unity is roughly the threshold of visibility [4].

Computation of u* and v* involves the intermediate quantities u', v', u'_n and v'_n defined as:

u' = \frac{4X}{X + 15Y + 3Z}, \quad v' = \frac{9Y}{X + 15Y + 3Z}    (1.81)

u'_n = \frac{4X_n}{X_n + 15Y_n + 3Z_n}, \quad v'_n = \frac{9Y_n}{X_n + 15Y_n + 3Z_n}    (1.82)

with the CIE XYZ values computed through (1.20) and (1.21). Finally, u* and v* are computed as:

u* = 13L*(u' - u'_n)    (1.83)
v* = 13L*(v' - v'_n)    (1.84)

Conversion from L*u*v* to XYZ is accomplished by ignoring the linear segment of L*. In particular, the linear segment can be ignored if the luminance variable Y is represented with eight bits of precision or less.


Then, the luminance Y is given by:

Y = \left(\frac{L* + 16}{116}\right)^3 Y_n    (1.85)

To compute X and Z, first compute u' and v' as:

u' = \frac{u*}{13L*} + u'_n, \quad v' = \frac{v*}{13L*} + v'_n    (1.86)

Finally, X and Z are given by:

X = \frac{1}{4}\left[\frac{u'(9.0 - 15.0\,v')\,Y}{v'} + 15.0\,u'\,Y\right]    (1.87)

Z = \frac{1}{3}\left[\frac{(9.0 - 15.0\,v')\,Y}{v'} - X\right]    (1.88)

Consider two color vectors x_{L*u*v*} and y_{L*u*v*} in the L*u*v* space, represented as:

x_{L*u*v*} = [x_{L*}, x_{u*}, x_{v*}]^T \quad \text{and} \quad y_{L*u*v*} = [y_{L*}, y_{u*}, y_{v*}]^T    (1.89)

The perceptual color distance in the L*u*v* space, called the total color difference \Delta E*_{uv} in [5], is defined as the Euclidean distance (L_2 norm) between the two color vectors x_{L*u*v*} and y_{L*u*v*}:

\Delta E*_{uv} = ||x_{L*u*v*} - y_{L*u*v*}||_{L_2} = \left[(x_{L*} - y_{L*})^2 + (x_{u*} - y_{u*})^2 + (x_{v*} - y_{v*})^2\right]^{1/2}    (1.90)

It should be mentioned that in a perceptually uniform space the Euclidean distance is an accurate measure of the perceptual color difference [5]. As such, the color difference formula \Delta E*_{uv} is widely used for the evaluation of color reproduction quality in an image processing system, such as a color coding system.
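A minimal sketch of the XYZ-to-L*u*v* conversion of (1.80)-(1.84) and the color difference of (1.90) is given below; the function names and the D65-like default white point are assumptions, and the white reference of (1.77)-(1.79) can be substituted as needed.

```python
import numpy as np

def xyz_to_luv(xyz, white=(0.9505, 1.0, 1.0891)):
    """CIE XYZ -> L*u*v* per (1.80)-(1.84); the default white point is an assumption."""
    x, y, z = xyz
    xn, yn, zn = white
    up, vp = 4 * x / (x + 15 * y + 3 * z), 9 * y / (x + 15 * y + 3 * z)       # (1.81)
    upn, vpn = 4 * xn / (xn + 15 * yn + 3 * zn), 9 * yn / (xn + 15 * yn + 3 * zn)  # (1.82)
    ratio = y / yn
    lstar = 116 * ratio ** (1 / 3) - 16 if ratio > 0.008856 else 903.3 * ratio     # (1.80)
    return np.array([lstar, 13 * lstar * (up - upn), 13 * lstar * (vp - vpn)])

def delta_e_uv(luv1, luv2):
    """Total color difference of (1.90): the Euclidean distance in L*u*v*."""
    return float(np.linalg.norm(np.asarray(luv1) - np.asarray(luv2)))

a = xyz_to_luv((0.30, 0.40, 0.20))
b = xyz_to_luv((0.32, 0.40, 0.20))
print(a, delta_e_uv(a, b))
```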

1.9.2 The CIE L*a*b* Color Space

The L*a*b* color space is the second uniform color space standardized by the CIE. It is also derived based on the CIE XYZ space and the white reference point [5], [37]. The lightness L* component is the same as in the L*u*v* space. The L*, a* and b* components are given by:

L* = 116\left(\frac{Y}{Y_n}\right)^{1/3} - 16    (1.91)

a* = 500\left[\left(\frac{X}{X_n}\right)^{1/3} - \left(\frac{Y}{Y_n}\right)^{1/3}\right]    (1.92)

b* = 200\left[\left(\frac{Y}{Y_n}\right)^{1/3} - \left(\frac{Z}{Z_n}\right)^{1/3}\right]    (1.93)

with the constraint that \frac{X}{X_n}, \frac{Y}{Y_n}, \frac{Z}{Z_n} > 0.01. This constraint will be satisfied for most practical purposes [4]. Hence, the modified formulae described in [5] for cases that do not satisfy this constraint can be ignored in practice [4], [10]. The back conversion to the XYZ space from the L*a*b* space is done by first computing the luminance Y, as described in the back conversion of L*u*v*, followed by the computation of X and Z:

Y = \left(\frac{L* + 16}{116}\right)^3 Y_n    (1.94)

X = \left(\frac{a*}{500} + \left(\frac{Y}{Y_n}\right)^{1/3}\right)^3 X_n    (1.95)

Z = \left(-\frac{b*}{200} + \left(\frac{Y}{Y_n}\right)^{1/3}\right)^3 Z_n    (1.96)

The perceptual color distance in the L*a*b* space is similar to the one in the L*u*v* space. The two color vectors x_{L*a*b*} and y_{L*a*b*} in the L*a*b* space can be represented as:

x_{L*a*b*} = [x_{L*}, x_{a*}, x_{b*}]^T \quad \text{and} \quad y_{L*a*b*} = [y_{L*}, y_{a*}, y_{b*}]^T    (1.97)

The perceptual color distance (or total color difference) \Delta E*_{ab} in the L*a*b* space between the two color vectors x_{L*a*b*} and y_{L*a*b*} is given by the Euclidean distance (L_2 norm):

\Delta E*_{ab} = ||x_{L*a*b*} - y_{L*a*b*}||_{L_2} = \left[(x_{L*} - y_{L*})^2 + (x_{a*} - y_{a*})^2 + (x_{b*} - y_{b*})^2\right]^{1/2}    (1.98)

The color difference formula \Delta E*_{ab} is applicable to the observing conditions normally found in practice, as in the case of \Delta E*_{uv}. However, this simple difference formula weights color differences too strongly when compared to experimental results. To correct the problem a new difference formula was recommended in 1994 by the CIE [25], [31]. The new formula is as follows:

\Delta E*_{94} = \left[\left(\frac{x_{L*} - y_{L*}}{K_L S_L}\right)^2 + \left(\frac{x_{a*} - y_{a*}}{K_C S_C}\right)^2 + \left(\frac{x_{b*} - y_{b*}}{K_H S_H}\right)^2\right]^{1/2}    (1.99)

where K_L, K_C, K_H are factors to match the perception of the background conditions, and S_L, S_C, S_H are linear functions of the differences in chroma. Standard reference values for the calculation of \Delta E*_{94} have been specified by the CIE. Namely, the values most often in use are K_L = K_C = K_H = 1, S_L = 1, S_C = 1 + 0.045(x_{a*} - y_{a*}) and S_H = 1 + 0.015(x_{b*} - y_{b*}) respectively. The parametric values may be modified to correspond to typical experimental conditions. As an example, for the textile industry the K_L factor should be 2, and the K_C and K_H factors should be 1. For all other applications a value of 1 is recommended for all parametric factors [38].
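Analogously, the L*a*b* conversion of (1.91)-(1.93) and the Euclidean difference of (1.98) can be sketched as follows; function names and the default white point are again assumptions.

```python
import numpy as np

def xyz_to_lab(xyz, white=(0.9505, 1.0, 1.0891)):
    """CIE XYZ -> L*a*b* per (1.91)-(1.93); assumes X/Xn, Y/Yn, Z/Zn > 0.01.
    The white point is an assumption; substitute the one appropriate to the device."""
    x, y, z = (c / w for c, w in zip(xyz, white))
    fx, fy, fz = x ** (1 / 3), y ** (1 / 3), z ** (1 / 3)
    return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])

def delta_e_ab(lab1, lab2):
    """Total color difference of (1.98): the Euclidean distance in L*a*b*."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

a = xyz_to_lab((0.30, 0.40, 0.20))
b = xyz_to_lab((0.32, 0.40, 0.20))
print(a, delta_e_ab(a, b))
```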

1.9.3 Cylindrical L*u*v* and L*a*b* Color Spaces

Any color expressed in the rectangular coordinate system of axes L*u*v* or L*a*b* can also be expressed in terms of cylindrical coordinates, with the perceived lightness L* and the psychometric correlates of chroma and hue [37]. The chroma in the L*u*v* space is denoted as C*_{uv} and that in the L*a*b* space as C*_{ab}. They are defined as [5]:

C*_{uv} = \left[(u*)^2 + (v*)^2\right]^{1/2}    (1.100)

C*_{ab} = \left[(a*)^2 + (b*)^2\right]^{1/2}    (1.101)

The hue angles are useful quantities for specifying hue numerically [5], [37]. The hue angle h_{uv} in the L*u*v* space and h_{ab} in the L*a*b* space are defined as [5]:

h_{uv} = \arctan\left(\frac{v*}{u*}\right)    (1.102)

h_{ab} = \arctan\left(\frac{b*}{a*}\right)    (1.103)

The saturation s_{uv} in the L*u*v* space is given by:

s_{uv} = \frac{C*_{uv}}{L*}    (1.104)

1.9.4 Applications of L*u*v* and L*a*b* Spaces

The L*u*v* and L*a*b* spaces are very useful in applications where precise quantification of the perceptual distance between two colors is necessary [5], for example in the realization of perceptually based vector order statistics filters. If a degraded color image has to be filtered so that it closely resembles, in perception, the un-degraded original image, then a good criterion to optimize is the perceptual error between the output image and the un-degraded original image. These spaces are also very useful for the evaluation of perceptual closeness or perceptual error between two color images [4]. Precise evaluation of the perceptual closeness between two colors is also essential in color matching systems used in various applications such as multimedia products, image arts, entertainment, and advertisements [6], [14], [22].


The L*u*v* and L*a*b* color spaces are extremely useful in imaging systems where exact perceptual reproduction of color images (color consistency) across the entire system is of primary concern, rather than real-time or simple computing. Applications include advertising, graphic arts, digitized or animated paintings, etc. Suppose an imaging system consists of various color devices, for example a video camera/digital scanner, a display device, and a printer, and a painting has to be digitized, displayed, and printed. The displayed and printed versions of the painting must appear as close as possible to the original image. The L*u*v* and L*a*b* color spaces are the best to work with in such cases. Both of these systems have been successfully applied to image coding for printing [4], [16]. Color calibration is another important process related to color consistency. It basically equalizes an image to be viewed under different illumination or viewing conditions. For instance, an image of a target object can only be taken under a specific lighting condition in a laboratory, but the appearance of this target object under normal viewing conditions, say in ambient light, has to be known. Suppose there is a sample object whose image under ambient light is available. Then the solution is to obtain the image of the sample object under the same specific lighting condition in the laboratory. A correction formula can then be formulated based on the images of the sample object obtained, and this can be used to correct the target object for the ambient light [14]. Perceptually based color spaces, such as L*a*b*, are very useful for computations in such problems [31], [37]. An instance where such calibration techniques have great potential is medical imaging in dentistry. Perceptually uniform color spaces, with the Euclidean metric to quantify color distances, are particularly useful in color image segmentation of natural scenes using histogram-based or clustering techniques. A method of detecting clusters by fitting to them some circular-cylindrical decision elements in the L*a*b* uniform color coordinate system was proposed in [39], [40]. The method estimates the clusters' color distributions without imposing any constraints on their forms. Boundaries of the decision elements are formed with constant lightness and constant chromaticity loci. Each boundary is obtained using only 1-D histograms of the L*H*C* cylindrical coordinates of the image data. The cylindrical coordinates L*H*C* [30] of the L*a*b* color space, known as lightness, hue, and chroma, are given by:

L* = L*    (1.105)
H* = \arctan(b*/a*)    (1.106)
C* = \left((a*)^2 + (b*)^2\right)^{1/2}    (1.107)

The L*a*b* space is often used in color management systems (CMS). A color management system handles the color calibration and color consistency issues. It is a layer of software resident on a computer that negotiates color reproduction between the application and color devices. Color management systems perform the color transformations necessary to exchange accurate


color between diverse devices [4], [43]. A uniform color space based on CIE L*u*v*, named TekHVC, was proposed by Tektronix as part of its commercially available CMS [45].

1.10 The Munsell Color Space

The Munsell color space represents the earliest attempt to organize color perception into a color space [5], [14], [46]. The Munsell space is defined as a comparative reference for artists. Its general shape is that of a cylindrical representation with three dimensions roughly corresponding to the perceived lightness, hue and saturation. However, contrary to the HSV or HSI color models, where the color solids are parameterized by hue, saturation and perceived lightness, the Munsell space uses the method of the color atlas, where the perception attributes are used for sampling. The fundamental principle behind the Munsell color space is that of equality of visual spacing between each of the three attributes. Hue is scaled according to some uniquely identifiable color. It is represented by a circular band divided into ten sections. The sections are defined as red, yellow-red, yellow, green-yellow, green, blue-green, blue, purple-blue, purple and red-purple. Each section can be further divided into ten subsections if finer divisions of hue are necessary. A chromatic hue is described according to its resemblance to one or two adjacent hues. Value in the Munsell color space refers to a color's lightness or darkness and is divided into eleven sections numbered zero to ten. Value zero represents black while a value of ten represents white. The chroma defines the color's strength. It is measured in numbered steps starting at one, with weak colors having low chroma values. The maximum possible chroma depends on the hue and the value being used. As can be seen in Fig. 1.14, the vertical axis of the Munsell color solid is the line of V values ranging from black to white. Hue changes along each of the circles perpendicular to the vertical axis. Finally, chroma starts at zero on the V axis and changes along the radius of each circle. The Munsell space is comprised of a set of 1200 color chips, each assigned a unique hue, value and chroma component. These chips are grouped in such a way that they form a three-dimensional solid which resembles a warped sphere [5]. There are different editions of the basic Munsell book of colors, with different finishes (glossy or matte), different sample sizes and a different number of samples. The glossy finish collection displays color point chips arranged on 40 constant-hue charts. On each constant-hue chart the chips are arranged in rows and columns. In this edition the colors progress from light at the top of each chart to very dark at the bottom by steps which are intended to be perceptually equal. They also progress from achromatic colors, such as white and gray at the inside edge of the chart, to chromatic colors at the outside edge of the chart by steps that are also intended to be perceptually equal. All the charts together make up the color atlas, which is the color solid of the Munsell system.

Fig. 1.14. The Munsell color system (value along the vertical axis, hue around the circle through R, YR, Y, GY, G, BG, B, PB, P and RP, and chroma along the radius)

Although the Munsell book of colors can be used to define or name colors, in practice it is not used directly for image processing applications. Usually stored image data, most often in RGB format, are converted to the Munsell coordinates using either lookup tables or closed formulas prior to the actual application. The conversion from the RGB components to the Munsell hue (H), value (V) corresponding to luminance, and chroma (C) corresponding to saturation, can be achieved using the following mathematical algorithm [47]:

x = 0.620R + 0.178G + 0.204B
y = 0.299R + 0.587G + 0.144B
z = 0.056G + 0.942B    (1.108)

A nonlinear transformation is applied to the intermediate values as follows:

p = f(x) - f(y)    (1.109)
q = 0.4(f(z) - f(y))    (1.110)

where f(r) = 11.6\,r^{1/3} - 1.6. Further, the new variables are transformed to:

s = (a + b\cos(\theta))\,p    (1.111)
t = (c + d\sin(\theta))\,q    (1.112)

where \theta = \tan^{-1}(q/p), a = 8.880, b = 0.966, c = 8.025 and d = 2.558. Finally, the requested values are obtained as:

H = \arctan(s/t)    (1.113)
V = f(y)    (1.114)
C = (s^2 + t^2)^{1/2}    (1.115)

Alternatively, conversion from RGB, or other color spaces, to the Munsell color space can be achieved through look-up tables and published charts [5]. In summary, the Munsell color system is an attempt to define color in terms of hue, chroma and lightness parameters based on subjective observations rather than direct measurements or controlled perceptual experiments. Although it has been found that the Munsell space is not as perceptually uniform as originally claimed and, despite the fact that it cannot directly integrate with additive color schemes, it is still in use today despite attempts to introduce colorimetric models for its replacement.
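A sketch of the RGB-to-Munsell (H, V, C) algorithm of (1.108)-(1.115) follows; the normalized input range, the use of atan2 for the angles and the function name are reconstruction assumptions rather than part of the original algorithm description.

```python
import math

def rgb_to_munsell_hvc(r, g, b):
    """Approximate RGB -> Munsell (H, V, C) following (1.108)-(1.115) [47].
    Inputs are assumed normalized to [0, 1]; angle conventions are assumptions."""
    f = lambda t: 11.6 * t ** (1.0 / 3.0) - 1.6
    x = 0.620 * r + 0.178 * g + 0.204 * b          # (1.108)
    y = 0.299 * r + 0.587 * g + 0.144 * b
    z = 0.056 * g + 0.942 * b
    p = f(x) - f(y)                                # (1.109)
    q = 0.4 * (f(z) - f(y))                        # (1.110)
    theta = math.atan2(q, p)
    s = (8.880 + 0.966 * math.cos(theta)) * p      # (1.111)
    t = (8.025 + 2.558 * math.sin(theta)) * q      # (1.112)
    h = math.atan2(s, t)                           # (1.113), hue angle in radians
    v = f(y)                                       # (1.114)
    c = math.hypot(s, t)                           # (1.115)
    return h, v, c

print(rgb_to_munsell_hvc(0.6, 0.4, 0.3))
```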

1.11 The Opponent Color Space

The opponent color space family is a set of physiologically motivated color spaces inspired by the physiology of the human visual system. According to the theory of color vision discussed in [48], human vision can be expressed in terms of opponent hues, yellow and blue on one hand and green and red on the other, which cancel each other when superimposed. In [49] an experimental procedure was developed which allowed researchers to quantitatively express the amounts of each of the basic hues present in any spectral stimulus. The color model of [50], [51], [52], [44] suggests the transformation of the RGB `cone' signals to three channels, one achromatic channel (I) and two opponent color channels (RG, YB), according to:

RG = R - G    (1.116)
YB = 2B - R - G    (1.117)
I = R + G + B    (1.118)

At the same time, a set of effective color features was derived by systematic experiments of region segmentation [53]. According to the segmentation procedure of [53], the color features which have deep valleys on their histograms and the largest discriminant power to separate the color clusters in a given region need not be the R, G, and B color features. Since a feature is said to have large discriminant power if its variance is large, color features with large discriminant power were derived by utilizing the Karhunen-Loeve (KL) transformation. At every step of segmenting a region, calculation of the new color features is done for the pixels in that region by the KL transform of the R, G, and B signals. Based on extensive experiments [53], it was concluded that three color features constitute an effective set of features for segmenting color images [54], [55]:

Fig. 1.15. The opponent color stage of the human visual system (the R, G and B cone signals feed an achromatic channel R+G+B and the opponent channels R-G and 2B-R-G)

I_1 = \frac{R + G + B}{3}    (1.119)
I_2 = R - B    (1.120)
I_3 = \frac{2G - R - B}{2}    (1.121)

In the opponent color space hue can be coded in a circular format ranging through blue, green, yellow, red and black to white. Saturation is defined as the distance from the hue circle, making hue and saturation specifiable within color categories. Therefore, although opponent representations are often thought of as linear transforms of the RGB space, the opponent representation is much more suitable for modeling perceived color than RGB is [14].
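The opponent channels of (1.116)-(1.118) and the I1I2I3 features of (1.119)-(1.121) are simple linear combinations of R, G and B; the short sketch below illustrates both (function names are illustrative).

```python
import numpy as np

def opponent_channels(rgb):
    """Opponent representation of (1.116)-(1.118): achromatic I and the RG, YB channels."""
    r, g, b = rgb
    return np.array([r + g + b, r - g, 2 * b - r - g])

def ohta_features(rgb):
    """I1, I2, I3 features of (1.119)-(1.121), derived from region-segmentation experiments."""
    r, g, b = rgb
    return np.array([(r + g + b) / 3.0, r - b, (2 * g - r - b) / 2.0])

print(opponent_channels((120, 200, 80)), ohta_features((120, 200, 80)))
```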

1.12 New Trends

The plethora of color models available poses application difficulties. Since most of them are designed to perform well in a specific application, their performance deteriorates rapidly under different operating conditions. Therefore, there is a need to merge the different (mainly device dependent) color spaces into a single standard space. The differences between the monitor RGB space and device independent spaces, such as the HVS and the CIE L*a*b* spaces, impose problems in applications such as multimedia database navigation and face recognition, primarily due to the complexity of the operations needed to support the transform from/to device dependent color spaces. To overcome such problems and to serve the needs of network-centric applications and WWW-based color imaging systems, a new standardized color space based on a colorimetric RGB (sRGB) space has recently been proposed [56]. The aim of the new color space is to complement the current color space


management strategies by providing a simple, yet efficient and cost effective, method of handling color in operating systems, device drivers and the Web, using a simple and robust device independent color definition. Since most computer monitors are similar in their key color characteristics, and the RGB space is the most suitable color space for the devices forming a modern computer-based imaging system, the colorimetric RGB space seems to be the best candidate for such a standardized color space. In defining a colorimetric color space, two factors are of paramount importance:

- the viewing environment parameters with their dependencies on the Human Visual System;
- the standard device space colorimetric definitions and transformations [56].

The viewing environment descriptions contain all the necessary transforms needed to support conversions between standard and target viewing environments. On the other hand, the colorimetric definitions provide the transforms necessary to convert between the new sRGB and the CIE-XYZ color space. The reference viewing environment parameters can be found in [56], with the sRGB tristimulus values calculated from the CIE-XYZ values according to the following transform:

\begin{bmatrix} R_{sRGB} \\ G_{sRGB} \\ B_{sRGB} \end{bmatrix} = \begin{bmatrix} 3.2410 & -1.5374 & -0.4986 \\ -0.9692 & 1.8760 & 0.0416 \\ 0.0556 & -0.2040 & 1.0570 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}    (1.122)

In practical image processing systems negative sRGB tristimulus values and sRGB values greater than 1 are not retained and are typically removed by utilizing some form of clipping. Subsequently, the linear tristimulus values are transformed to nonlinear sR'G'B' values as follows:

1. If R_{sRGB}, G_{sRGB}, B_{sRGB} \le 0.0034 then
sR' = 12.92\,R_{sRGB}    (1.123)
sG' = 12.92\,G_{sRGB}    (1.124)
sB' = 12.92\,B_{sRGB}    (1.125)
2. else if R_{sRGB}, G_{sRGB}, B_{sRGB} > 0.0034 then
sR' = 1.055\,R_{sRGB}^{1/2.4} - 0.055    (1.126)
sG' = 1.055\,G_{sRGB}^{1/2.4} - 0.055    (1.127)
sB' = 1.055\,B_{sRGB}^{1/2.4} - 0.055    (1.128)


The effect of the above transformation is to closely fit a straightforward gamma value of 2.2 with a slight offset to allow for invertibility in integer mathematics. The nonlinear sR'G'B' values are then converted to digital values with a black digital count of 0 and a white digital count of 255 for 24-bit coding as follows:

sR_d = 255.0\,sR'    (1.129)
sG_d = 255.0\,sG'    (1.130)
sB_d = 255.0\,sB'    (1.131)

The backwards transform is defined as follows:

sR' = sR_d / 255.0    (1.132)
sG' = sG_d / 255.0    (1.133)
sB' = sB_d / 255.0    (1.134)

and
1. if sR', sG', sB' \le 0.03928 then
R_{sRGB} = sR' / 12.92    (1.135)
G_{sRGB} = sG' / 12.92    (1.136)
B_{sRGB} = sB' / 12.92    (1.137)
2. else if sR', sG', sB' > 0.03928 then
R_{sRGB} = \left(\frac{sR' + 0.055}{1.055}\right)^{2.4}    (1.138)
G_{sRGB} = \left(\frac{sG' + 0.055}{1.055}\right)^{2.4}    (1.139)
B_{sRGB} = \left(\frac{sB' + 0.055}{1.055}\right)^{2.4}    (1.140)

with

\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2126 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9505 \end{bmatrix} \begin{bmatrix} R_{sRGB} \\ G_{sRGB} \\ B_{sRGB} \end{bmatrix}    (1.141)

The addition of a new standardized color space which supports Web-based imaging systems, device drivers, printers and monitors, complementing the existing color management support, can benefit producers and users alike by presenting a clear path towards an improved color management system.
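A compact sketch of the sRGB encoding and decoding steps quoted above, using the matrices of (1.122) and (1.141) and the thresholds given in the text, is shown below; the per-channel helper functions are illustrative assumptions.

```python
import numpy as np

# CIE XYZ -> linear sRGB matrix of (1.122) and the linear sRGB -> XYZ matrix of (1.141).
XYZ_TO_SRGB = np.array([[ 3.2410, -1.5374, -0.4986],
                        [-0.9692,  1.8760,  0.0416],
                        [ 0.0556, -0.2040,  1.0570]])
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

def srgb_encode(c):
    """Linear sRGB value -> nonlinear sR'G'B' per (1.123)-(1.128), after clipping to [0, 1]."""
    c = min(max(c, 0.0), 1.0)
    return 12.92 * c if c <= 0.0034 else 1.055 * c ** (1.0 / 2.4) - 0.055

def srgb_decode(cp):
    """Nonlinear sR'G'B' value in [0, 1] -> linear sRGB per (1.135)-(1.140)."""
    return cp / 12.92 if cp <= 0.03928 else ((cp + 0.055) / 1.055) ** 2.4

xyz = np.array([0.30, 0.40, 0.20])
rgb_lin = XYZ_TO_SRGB @ xyz
digital = [round(255.0 * srgb_encode(c)) for c in rgb_lin]    # (1.129)-(1.131)
print(rgb_lin, digital)
```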


1.13 Color Images

Color imaging systems are used to capture and reproduce the scenes that humans see. Imaging systems can be built using a variety of optical, electronic or chemical components. However, all of them perform three basic operations, namely: (i) image capture, (ii) signal processing, and (iii) image formation. Color-imaging devices exploit the trichromatic theory of color to regulate how much light from the three primary colors is absorbed or reflected to produce a desired color. There are a number of ways of acquiring and reproducing color images, including but not limited to:

- Photographic film. The film used by conventional cameras contains three emulsion layers, which are sensitive to the red, green and blue light entering through the camera lens.
- Digital cameras. Digital cameras use a CCD to capture image information. Color information is captured by placing red, green and blue filters before the CCD and storing the response to each channel.
- Cathode-ray tubes. CRTs are the display devices used in televisions and computer monitors. They utilize an extremely fine array of phosphors that emit red, green and blue light at intensities governed by an electron gun, in accordance with an image signal. Due to the close proximity of the phosphors and the spatial filtering characteristics of the human eye, the emitted primary colors are mixed together, producing an overall color.
- Image scanners. The most common method of scanning color images is the utilization of three CCDs, each with a filter to capture red, green and blue light reflectance. These three images are then merged to create a copy of the scanned image.
- Color printers. Color printers are the most common method of attaining a printed copy of a captured color image. Although the trichromatic theory is still implemented, color in this domain is subtractive. The primaries used are usually cyan, magenta and yellow. The amounts of the three primaries which appear on the printed media govern how much light is reflected.

1.14 Summary

In this chapter the phenomenon of color was discussed. The basic color sensing properties of the human visual system and the CIE standard color specification system XYZ were described in detail. The existence of three types of spectral absorption cones in the human eye serves as the basis of the trichromatic theory of color, according to which all visible colors can be created by combining three primaries. Thus, any color can be uniquely represented by a three dimensional vector in a color model defined by the three primary colors.


Table 1.3. Color models

Color System    Transform (from RGB)    Component correlation
RGB             --                      highly correlated
R'G'B'          non linear              highly correlated
XYZ             linear                  correlated
YIQ             linear                  uncorrelated
YCC             linear                  uncorrelated
I1I2I3          linear                  correlated
HSV             non linear              correlated
HSI             non linear              correlated
HLS             non linear              correlated
L*u*v*          non linear              correlated
L*a*b*          non linear              correlated
Munsell         non linear              correlated

Fig. 1.16. A taxonomy of color models

Models                                   Color Spaces                      Applications
Colorimetric                             XYZ                               colorimetric calculations
Device-oriented, non-uniform spaces      RGB                               storage, processing, analysis
                                         YIQ, YCC                          coding, color TV, storage (CD-ROM)
Device-oriented, uniform spaces          L*a*b*, L*u*v*                    color difference evaluation, analysis, color management systems
User-oriented                            Munsell                           human color perception
                                         HSI, HSV, HLS, I1I2I3             multimedia, computer graphics, human visual system

Color specification models are of paramount importance in applications where efficient manipulation and communication of images and video frames are required. A number of color specification models are in use today. Examples include color spaces such as RGB, R'G'B', YIQ, HSI, HSV, HLS, L*u*v*, and L*a*b*. The color model is a mathematical representation of spectral colors in a finite dimensional vector space. In each one of them the actual color is reconstructed by combining the basis elements of the vector

spaces, the so-called primary colors. By defining different primary colors for the representation of the system, different color models can be devised. One important aspect is the color transformation, the change of coordinates from one color system to another (see Table 1.3). Such a transformation associates each color in one system with a color in the other model. Each color model comes into existence for a specific application in color image processing. Unfortunately, there is no technique for determining the optimum coordinate model for all image processing applications. For a specific application the choice of a color model depends on the properties of the model and the design characteristics of the application. Table 1.14 summarizes the most popular color systems and some of their applications.

References 1. Gonzalez, R., Woods, R.E. (1992): Digital Image Processing. Addisson Wesley, Reading MA. 2. Robertson, P., Schonhut, J. (1999): Color in computer graphics. IEEE Computer Graphics and Applications, 19(4), 18-19. 3. MacDonald, L.W. (1999): Using color e ectively in computer graphics. IEEE Computer Graphics and Applications, 19(4), 20-35. 4. Poynton, C.A. (1996): A Technical Introduction to Digital Video. Prentice Hall, Toronto, also available at http://www.inforamp.net/poynton/Poynton{ Digital-Video.html . 5. Wyszecki, G., Stiles, W.S. (1982): Color Science, Concepts and Methods, Quantitative Data and Formulas. John Wiley, N.Y. , 2nd Edition. 6. Hall, R.A. (1981): Illumination and Color in Computer Generated Imagery. Springer Verlag, New York, N.Y. 7. Hurlbert, A. (1989): The Computation of Color. Ph.D Dissertation, Massachusetts Institute of Technology. 8. Hurvich, Leo M. (1981): Color Vision. Sinauer Associates, Sunderland MA. 9. Boynton, R.M. (1990): Human Color Vision. Halt, Rinehart and Winston. 10. Gomes, J., Velho, L. (1997): Image Processing for Computer Graphics. Springer Verlag, New York, N.Y., also available at http://www.springerny.com/catalog/np/mar97np/DATA/0-387-94854-6.html .


11. Fairchild, M.D. (1998): Color Appearance Models. Addison-Wesley, Readings, MA. 12. Sharma, G., Yrzel, M.J., Trussel, H.J. (1998): Color imaging for multimedia. Proceedings of the IEEE, 86(6): 1088{1108. 13. Sharma, G., Trussel, H.J. (1997): Digital color processing. IEEE Trans. on Image Processing, 6(7): 901-932. 14. Lammens, J.M.G. (1994): A Computational Model for Color Perception and Color Naming. Ph.D Dissertation, State University of New York at Bu alo, Bu alo, New York. 15. Johnson, G.M., Fairchild, M.D. (1999): Full spectral color calculations in realistic image synthesis. IEEE Computer Graphics and Applications, 19(4), 47-53. 16. Lu, Guoyun (1996): Communication and Computing for Distributed Multimedia Systems. Artech House Publishers, Boston, MA. 17. Kubinger, W., Vincze, M., Ayromlou, M. (1998): The role of gamma correction in colour image processing. in Proceedings of the European Signal Processing Conference, 2: 1041{1044. 18. Luong, Q.T. (1993): Color in computer vision. in Handbook of Pattern Recognition and Computer Vision, Word Scienti c Publishing Company): 311{368. 19. Young, T. (1802): On the theory of light and colors. Philosophical Transactions of the Royal Society of London, 92: 20{71. 20. Maxwell, J.C. (1890): On the theory of three primary colors. Science Papers 1, Cambridge University Press: 445{450. 21. Padgham, C.A., Saunders, J.E. (1975): The Perception of Light and Color. Academic Press, New York, N.Y. 22. Judd, D.B., Wyszecki, G. (1975): Color in Business, Science and Industry. John Wiley, New York, N.Y. 23. Foley, J.D., vanDam, A., Feiner, S.K., Hughes, J.F. (1990): Fundamentals of Interactive Computer Graphics. Addison Wesley, Reading, MA. 24. CCIR (1990): CCIR Recommendation 709. Basic parameter values for the HDTV standard for studio and for international program exchange. Geneva, Switcherland. 25. CIE (1995): CIE Publication 116. Industrial color-di erence evaluation. Vienna, Austria. 26. Poynton, C.A. (1993): Gamma and its disguises. The nonlinear mappings of intensity in perception, CRTs, lm and video. SMPTE Journal: 1099{1108. 27. Kasson M.J., Ploa e, W. (1992): An analysis of selected computer interchange color spaces. ACM Transaction of Graphics, 11(4): 373-405. 28. Shih, Tian-Yuan (1995): The reversibility of six geometric color spaces. Photogrammetric Engineering and Remote Sensing, 61(10): 1223{1232. 29. Levkowitz H., Herman, G.T. (1993): GLHS: a generalized lightness, hue and saturation color model. Graphical Models and Image Processing, CVGIP-55(4): 271{285. 30. McLaren, K. (1976): The development of the CIE L a b uniform color space. J. Soc. Dyers Colour, 338{341. 31. Hill, B., Roer, T., Vorhayen, F.W. (1997): Comparative analysis of the quantization of color spaces on the basis of the CIE-Lab color di erence formula. ACM Transaction of Graphics, 16(1): 110{154. 32. Hall, R. (1999): Comparing spectral color computation methods. IEEE Computer Graphics and Applications, 19(4), 36-44. 33. Hague, G.E., Weeks, A.R., Myler, H.R. (1995): Histogram equalization of 24 bit color images in the color di erence color space. Journal of Electronic Imaging, 4(1), 15-23.


34. Weeks, A.R. (1996): Fundamentals of Electronic Image Processing. SPIE Press, Piscataway, New Jersey. 35. Benson, K. B. (1992): Television Engineering Handbook. McGraw-Hill, London, U.K. 36. Smith, A.R. (1978): Color gamut transform pairs. Computer Graphics (SIGGRAPH'78 Proceedings), 12(3): 12{19. 37. Healey, C.G., Enns, J.T. (1995): A perceptual color segmentation algorithm. Technical Report, Department of Computer Science, University of British Columbia, Vancouver. 38. Luo, M. R. (1998): Color science. in Sangwine, S.J., Horne, R.E.N. (eds.), The Colour Image Processing Handbook, 26{52, Chapman & Hall, Cambridge, Great Britain. 39. Celenk, M. (1988): A recursive clustering technique for color picture segmentation. Proceedings of the Int. Conf. on Computer Vision and Pattern Recognition, 1: 437{444. 40. Celenk, M. (1990): A color clustering technique for image segmentation. Computer Vision, Graphics, and Image Processing, 52: 145{170. 41. Cong, Y. (1998): Intelligent Image Databases. Kluwer Academic Publishers, Boston, Ma. 42. Ikeda, M. (1980): Fundamentals of Color Technology. Asakura Publishing, Tokyo, Japan. 43. Rhodes, P. A. (1998): Colour management for the textile industry. in Sangwine, S.J., Horne, R.E.N. (eds.), The Colour Image Processing Handbook, 307-328, Chapman & Hall, Cambridge, Great Britain. 44. Palus, H. (1998): Colour spaces. in Sangwine, S.J., Horne, R.E.N. (eds.), The Colour Image Processing Handbook, 67{89, Chapman & Hall, Cambridge, Great Britain. 45. Tektronix (1990): TekColor Color Management System: System Implementers Manual. Tektronix Inc. 46. Birren, F. (1969): Munsell: A Grammar of Color. Van Nostrand Reinhold, New York, N.Y. 47. Miyahara, M., Yoshida, Y. (1988): Mathematical transforms of (R,G,B) colour data to Munsell (H,V,C) colour data. Visual Communications and Image Processing, 1001, 650{657. 48. Hering, E. (1978): Zur Lehe vom Lichtsinne. C. Gerond's Sohn, Vienna, Austria. 49. Jameson, D., Hurvich, L.M. (1968): Opponent-response functions related to measured cone photo pigments. Journal of the Optical Society of America, 58: 429{430. 50. de Valois, R.L., De Valois, K.K. (1975): Neural coding of color. in Carterette, E.C., Friedman, M.P. (eds.), Handbook of Perception. Volume 5, Chapter 5, 117{166, Academic Press, New York, N.Y. 51. de Valois, R.L., De Valois, K.K. (1993): A multistage color model. Vision Research 33(8): 1053{1065. 52. Holla, K. (1982): Opponent colors as a 2-dimensional feature within a model of the rst stages of the human visual system. Proceedings of the 6th Int. Conf. on Pattern Recognition, 1: 161{163. 53. Ohta, Y., Kanade, T., Sakai, T. (1980): Color information for region segmentation. Computer Graphics and Image Processing, 13: 222{241. 54. von Stein, H.D., Reimers, W. (1983): Segmentation of color pictures with the aid of color information and spatial neighborhoods. Signal Processing II: Theories and Applications, 1: 271{273. 55. Tominaga S. (1986): Color image segmentation using three perceptual attributes. Proceedings of CVPR'86, 1: 628-630.


56. Stockes, M., Anderson, M., Chandrasekar, Sri., Motta, Ricardo (1997): A standard default color space for the Internet sRGB. International Color Consortium (ICC), contributed document electronic reprint (http://www.color.org).

2. Color Image Filtering

2.1 Introduction

The function of a filter is to transform a signal into another more suitable for a given purpose [1]. As such, filters find applications in image processing, computer vision, telecommunications, geophysical signal processing and biomedicine. However, the most popular application of filtering is the process of detecting and removing unwanted noise from a signal of interest. Noise affects the perceptual quality of the image, decreasing not only the appreciation of the image but also the performance of the task for which the image was intended. Therefore, filtering is an essential part of any image processing system, whether the final product is used for human inspection, such as visual inspection, or for automatic analysis. Noise introduces random variations into sensor readings, making them different from the ideal values, and thus introducing errors and undesirable side effects in subsequent stages of the image processing process. Noise may result from sensor malfunction, imperfect optics, electronic interference, or flaws in the data transmission procedure. In considering the signal-to-noise ratio over practical communication media, such as microwave or satellite links, there can be a degradation in quality due to low received signal power. Degradation of the image quality can also be a result of processing techniques, such as aperture correction, which amplify both high frequency signals and noise [2], [3], [4]. In many cases, the noise characteristics vary within the same application. Such cases include the channel noise in image transmission as well as atmospheric noise corrupting multichannel satellite images. The noise encountered in digital image processing applications cannot always be described in terms of the commonly assumed Gaussian model. It can, however, be characterized in terms of impulsive sequences which occur in the form of short duration, high energy spikes attaining large amplitudes with probability higher than that predicted by a Gaussian density model [5], [6], [7]. Thus, it is desirable for image filters to be robust to impulsive or generally heavy-tailed, non-Gaussian noise [1], [8]. In addition, when processing color images to remove noise, care must be taken to retain the chromatic information. The different filters applied to color images are required to preserve chromaticity, edges


and fine image details. The preservation and possible enhancement of these features is of paramount importance during processing. Before the different filtering techniques developed over the last ten years to suppress noise are examined, the different kinds of noise corrupting color images should be defined. It is shown how they can be quantified and used in the context of digital color image processing. Statistical tools and techniques consistent with the color representation models, which form the basis for most of the color image filters discussed in the second part of this chapter, are also considered.

2.2 Color Noise

Based on the trichromatic theory of color, color images are encoded as scalar values in the three color channels, namely red, green and blue. Color sensors, as any other sensor, can be affected by noise due to malfunction, interference or design flaw. As a result, instead of recording the ideal color value, a random fluctuation around this value is registered by each color channel. Although it is relatively easy to treat noise in the three chromatic channels separately and apply existing gray scale filtering techniques to reduce the scalar noise magnitudes, a different treatment of noise in the context of color images is needed. Color noise can be viewed as a color fluctuation given to a certain color signal. As such, the color noise signal should be considered as a 3-channel perturbation vector in the RGB color space, affecting the spread of the actual color vectors in the space [2]. Image sensors can be divided into two categories, photochemical and photoelectronic sensors [1]. The positive and negative photographic films are typical photochemical sensors. Although they have the advantage that they can detect and record the image at the same time, the image that they produce cannot be easily digitized. In photochemical sensors, such as films, the noise is mainly due to the silver grains that precipitate during the film exposure. They behave randomly during the film exposure and development, and experimental studies have shown that this noise, often called film grain noise, can be modeled in its limit as a Poisson process or a Gaussian process [9]. This type of noise is particularly dominant in images acquired with high speed film due to the film's large silver halide grain size. In addition to the film grain noise, photographic noise is due to dust that collects on the optics and the negatives during the film developing process [10]. Photoelectronic sensors have the advantage over film that they can be used to drive an image digitizer directly. Among the several photoelectronic sensors, such as standard vidicon tubes, Charge Injection Devices (CID), Charge Coupled Devices (CCD), and silicon vidicon tubes, CCDs are the most extensively used [11]. CCD cameras consist of a two-dimensional array of solid state light sensing elements, the so-called cells. The incident light induces electric charges in each cell. These charges are shifted to the right


from cell to cell by using a two-phase clock and arrive at the read-out register. The rows of cells are scanned sequentially during a vertical scan and thus the image is recorded and sampled simultaneously. In photoelectronic sensors two kinds of noise appear, namely: (i) thermal noise, due to the various electronic circuits, which is usually modeled as additive white, zero-mean, Gaussian noise, and (ii) photoelectronic noise, which is produced by the random fluctuation of the number of photons on the light sensitive surface of the sensor. Assuming a low level of fluctuation, it has a Bose-Einstein statistic and is modeled by a Poisson-like distribution. On the other hand, when its level is high, the noise can be modeled as a Gaussian process with standard deviation equal to the square root of the mean. In the particular case of CCD cameras, transfer loss noise is also present. In CCD technology, charges are transferred from one cell to the other. However, in practice, this process is not complete. A fraction of the charges is not transferred, and it represents the transfer noise. This noise occurs along the rows of cells and therefore has strong horizontal correlation. It usually appears as a white smear located on one side of a bright image spot. Other types of noise, due to capacitance coupling of clock lines and output lines or due to noisy cell re-charging, are also present in CCD cameras [1].

2.3 Modeling Sensor Noise

This section focuses on thermal noise. For analysis purposes it is assumed that the scalar (gray scale) sensor noise is white Gaussian in nature, having the following probability distribution function:

p(x_n) = N(0, \sigma) = \frac{1}{(2\pi\sigma^2)^{1/2}} \exp\left(\frac{-x^2}{2\sigma^2}\right)    (2.1)

It can be reasonably assumed that all three color sensors have the same zero average noise magnitude, with constant noise variance \sigma^2 over the entire image plane. To further simplify the analysis, it is assumed that the noise signals corrupting the three color channels are uncorrelated. Let the noise perturbation vector in the RGB color space be denoted as p = (r^2 + g^2 + b^2)^{1/2}, where r, g, b are the scalar perturbation quantities (magnitudes) in the red, green and blue chromatic channels respectively. Based on the assumption of identical noise distributions of variance \sigma^2 for the noise corrupting the three sensors, it can be expected that the noise perturbation vector has a spatial probability density function which depends only on the value of the perturbation magnitude p, as follows:

p_r(p) = \int_{-p}^{p} \int_{-(p^2 - r^2)^{1/2}}^{(p^2 - r^2)^{1/2}} p_{r_r}\, p_{r_g} \left[ p_{r_b}(p_b) + p_{r_b}(-p_b) \right] \, dg \, dr    (2.2)

p_r(p) = \left( \frac{2p^2}{(2\pi)^{1/2}\sigma^3} \right) \exp\left( \frac{-p^2}{2\sigma^2} \right)    (2.3)

with p_{r_r} = p_{r_g} = \frac{1}{(2\pi\sigma^2)^{1/2}} \exp\left(\frac{-x^2}{2\sigma^2}\right) and p_b = (p^2 - r^2 - g^2)^{1/2}. The probability distribution function has its peak value at p = (2)^{1/2}\sigma, unlike the scalar zero-mean noise functions assumed at the beginning. In practical terms, this suggests that if a non-zero scalar noise distribution exists in an individual channel of a color sensor, then the RGB reading will be corrupted by noise, and the registered values will be different from the original ones [2]. Short-tailed thermal noise modeled as a Gaussian distribution is not the only type of noise corrupting color images. In some cases, filtering schemes need to be evaluated under a different noise scenario. One such possible scenario is the presence of noise modeled after a long-tailed distribution, such as the exponential or Cauchy distribution [1]. In gray scale image processing, the bi-exponential distribution is used for this purpose. The distribution has the form p(x) = \frac{\lambda}{2} \exp(-\lambda |x|), with \lambda \ge 0. For the case of color images, with three channels, the multivariate analogue with the Euclidean distance is used instead of the absolute value used in the single channel case [4], [12]. This gives a spherically symmetric exponential distribution of:

p(x) = K \exp\left(-\lambda (r^2 + g^2 + b^2)^{1/2}\right)    (2.4)

For this to be a valid probability distribution, K must be selected such that:

\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} p(x) \, dr \, dg \, db = 1    (2.5)

Combining the above two equations and transforming to spherical coordinates the following is obtained:

\int_{0}^{2\pi} \int_{0}^{\pi} \int_{0}^{\infty} K \exp(-\lambda r_d)\, r_d^2 \sin(\phi) \, dr_d \, d\phi \, d\theta = 1    (2.6)

Solving for K gives K = \frac{\lambda^3}{8\pi}, and hence:

p(x) = \frac{\lambda^3}{8\pi} \exp\left(-\lambda (r^2 + g^2 + b^2)^{1/2}\right)    (2.7)

where r_d is the length of the color vector in spherical coordinates. Evaluating the first and second moments of the distribution gives n_i = E[x_i] = 0, i = 1, 2, 3, \sigma_{ii} = E[x_i^2] = \frac{4}{\lambda^2}, i = 1, 2, 3, and \sigma_{ij} = E[x_i x_j] = 0, i \neq j, i, j = 1, 2, 3. Re-writing \lambda = \frac{2}{\sigma}, the distribution takes the following form:

p(x) = \frac{1}{\pi\sigma^3} \exp\left(-\frac{2}{\sigma}(r^2 + g^2 + b^2)^{1/2}\right)    (2.8)
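The behaviour of the perturbation magnitude described after (2.3) can be checked empirically; the sketch below draws i.i.d. zero-mean Gaussian noise for the three channels and verifies that the histogram of p = (r^2 + g^2 + b^2)^{1/2} peaks near \sqrt{2}\sigma. The sample size, seed and bin count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 5.0

# Zero-mean white Gaussian noise, independent in the R, G and B channels (Sec. 2.3).
noise = rng.normal(0.0, sigma, size=(100_000, 3))
magnitude = np.linalg.norm(noise, axis=1)          # p = (r^2 + g^2 + b^2)^(1/2)

# The histogram mode should sit close to sqrt(2)*sigma, the peak of (2.3).
counts, edges = np.histogram(magnitude, bins=200)
mode = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
print(mode, np.sqrt(2.0) * sigma)
```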


2.4 Modeling Transmission Noise

Recording noise is not the only kind of noise encountered in the imaging process. Image transmission noise is also present, and there are various sources that can generate this type of noise. Among others, there are man-made phenomena, such as car ignition systems, industrial machines in the vicinity of the receiver, switching transients in power lines and various unprotected switches. In addition, natural causes, such as lightning in the atmosphere and ice cracking in the antarctic region, can also affect the transmission process. The transmission noise, also known in the case of gray scale imaging as salt-pepper noise, is modeled after an impulsive distribution. However, a problem in the study of the effect of this noise on the image processing process is the lack of a model of multivariate impulsive noise. A number of simplified models have been introduced recently to assist in the performance evaluation of the different color image filters. The three-variate impulsive noise model considered here is as follows [13], [14]:

n(x) = \begin{cases} s & \text{with probability } (1 - p) \\ (d, s_2, s_3) & \text{with probability } p_1 p \\ (s_1, d, s_3) & \text{with probability } p_2 p \\ (s_1, s_2, d) & \text{with probability } p_3 p \\ (d, d, d) & \text{with probability } p_S p \end{cases}    (2.9)

where n(x) is the noisy signal, s = (s_1, s_2, s_3) is the noise-free color vector, d is the impulse value and

p_S = 1 - p_1 - p_2 - p_3    (2.10)

where \sum_{i=1}^{3} p_i \le 1 and p is the impulsive noise degree of contamination. The impulse d can have either positive or negative values. It is further assumed that d differs from s_1, s_2, s_3 and that the delta functions are situated at (+255, -255). Thus, when an impulse is added or subtracted, forcing the pixel value outside the [0, 255] range, clipping is applied to force the corrupted noise value into the integer range specified by the 8-bit arithmetic. In many practical situations an image is often corrupted by both additive Gaussian noise due to faulty sensors and transmission noise introduced by environmental interference or faulty communication. Thus, an image can be thought of as corrupted by mixed noise according to the following model:

y(x) = \begin{cases} s(x) + n(x) & \text{with probability } (1 - p_I) \\ n_I(x) & \text{otherwise} \end{cases}    (2.11)

where s(x) is the noise-free 3-variate color signal, with the additive noise n(x) modeled as zero-mean white Gaussian noise and the transmission noise n_I(x) modeled as multivariate impulsive noise with p_I the impulsive noise degree of contamination [14], [15].
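A minimal sketch of the three-variate impulsive model of (2.9), with clipping to the 8-bit range as described in the text, is given below; the default channel probabilities and the helper name are assumptions.

```python
import numpy as np

def add_impulsive_noise(pixel, p, p1=0.25, p2=0.25, p3=0.25,
                        rng=np.random.default_rng(1)):
    """Corrupt one RGB pixel with the three-variate impulsive model of (2.9).
    p is the overall contamination degree; p1, p2, p3 split it among the channels,
    with the remainder 1 - p1 - p2 - p3 producing a full (d, d, d) impulse.
    The defaults are illustrative assumptions.  Values are clipped to [0, 255]."""
    out = np.array(pixel, dtype=float)
    if rng.random() < p:
        d = rng.choice([255.0, -255.0])            # delta functions at +/- 255
        u = rng.random()
        if u < p1:
            out[0] = d
        elif u < p1 + p2:
            out[1] = d
        elif u < p1 + p2 + p3:
            out[2] = d
        else:
            out[:] = d
        out = np.clip(out, 0, 255)                 # 8-bit clipping described in the text
    return out

print(add_impulsive_noise([120, 60, 200], p=0.1))
```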


From the discussion above, it can be concluded that the simplest model in color image processing, and the most commonly used, is the additive noise model. According to this model, it is assumed that variations in image colors are gradual. Thus, pixels which are significantly different from their neighbors can be attributed to noise. Therefore, most image filtering techniques attempt to replace those atypical readings, usually called outliers, with values derived from nearby pixels. Based on this principle, several filtering techniques have been proposed over the years. Each different filter discussed in this chapter considers color images as discrete two-dimensional sequences of vectors [y(N_1, N_2), N_1, N_2 \in Z]. In general, a color pixel y is a p-variate vector signal, with p = 3 when a color model such as RGB is considered. The index set Z is the set of all integers Z = (..., -1, 0, 1, ...). For simplicity, let k = (N_1, N_2), where k \in Z^2. Each multivariate image pixel y_k = [y_1(k), y_2(k), ..., y_p(k)]^T belongs to a p-dimensional vector space R^p. Let the set of image vectors spanned by an n = (2N + 1)(2N + 1) window centered at k be defined as W(n). The color image filters will operate on the window's center sample y_k, and this window will be moved across the set of vectors in W(n) in the image plane in a raster scan fashion [25], with W*(n) denoting the set of vectors in W(n) without the center pixel y_k. At a given image location, the set of vectors y_i, i = 1, 2, ..., n which is the result of a constant vector-valued signal x = [x_1, x_2, ..., x_p]^T corrupted by additive zero-mean, p-channel noise n_k = [n_1, n_2, ..., n_p]^T is accounted for by [16], [4], [3]:

y_k = x + n_k    (2.12)

The noise vectors are distributed according to some joint distribution function f(n). Furthermore, the noise vectors at different instants are assumed to be independently and identically distributed (i.i.d.) and uncorrelated with the constant signal. As explained before, some of the observed color signal values have been altered due to the noise. The objective of the different filtering structures is to eliminate these outlying observations, or reduce their influence, without disturbing those color vectors which have not been significantly corrupted by noise. Several filtering techniques have been proposed over the years. Among them are the linear processing techniques, whose mathematical simplicity and the existence of a unifying theory make their design and implementation easy. Their simplicity, in addition to their satisfactory performance in a variety of practical applications, has made them the methods of choice for many years. However, most of these techniques operate under the assumption that the signal is represented by a stationary model, and thus try to optimize the parameters of a system suitable for such a model. Many signal processing problems cannot be solved efficiently by using linear techniques. Unfortunately, linear processing techniques fail in image processing, since they cannot cope with the nonlinearities of the image formation model and cannot take into account the nonlinear nature of the human visual system [1]. Image signals are composed of flat regional parts and abruptly


changing areas, such as edges, which carry important information for visual perception. Filters having good edge and image detail preservation properties are highly suitable for image filtering and enhancement. Unfortunately, most of the linear signal processing techniques tend to blur edges and to degrade lines, edges and other fine image details [1]. The need to deal with increasingly complex nonlinear systems, coupled with the availability of increasing computing power, has led to a reevaluation of the conventional filtering methodologies. New algorithms and techniques which can take advantage of the increase in computing power and which can handle more realistic assumptions are needed. To this end, nonlinear signal processing techniques have been introduced more recently. Nonlinear techniques, theoretically, are able to suppress non-Gaussian noise, to preserve important signal elements, such as edges and fine details, and to eliminate degradations occurring during signal formation or transmission through nonlinear channels. In spite of an impressive growth in the past two decades, coupled with new theoretical results, new tools and emerging applications, nonlinear filtering techniques still lack a unifying theory that can encompass existing nonlinear processing techniques. Instead, each class of nonlinear operators possesses its own mathematical tools which can provide a reasonably good analysis of its performance. As a consequence, a multitude of nonlinear signal processing techniques have appeared in the literature. At present the following classes of nonlinear processing techniques can be identified:
- polynomial based techniques [17], [18]
- homomorphic techniques [1], [19]
- techniques based on mathematical morphology [20], [21], [22], [23]
- order statistic based techniques [24], [1], [25]

Polynomial filters, especially second-order Volterra filters (quadratic filters), have been used for color image filtering, nonlinear channel modeling in telecommunications, as well as in multichannel geophysical signal processing. Homomorphic filters and their extensions form one of the first classes of nonlinear filters and have been used extensively in digital image and signal processing. This filter class has been used in various practical applications, such as multiplicative and signal-dependent noise removal, color image processing, multichannel satellite image processing and the identification of fingerprints. Their basic characteristic is that they use nonlinearities (mainly the logarithm) to transform nonlinearly related signals into additive signals and then process them by linear filters. The output of the linear filter is then transformed by the inverse nonlinearity. Morphological filters utilize geometric rather than analytical features of signals. Mathematical morphology can be described geometrically in terms of the actions of the operators on binary, monochrome or color images. The geometric description depends on small synthetic images called structuring elements. This form of mathematical morphology, often called structural morphology, is highly useful in


the analysis and processing of images. Morphological filters are found in image processing and analysis applications; specifically, areas of application include image filtering, image enhancement and edge detection. However, the most popular family of nonlinear filters is that of the order statistics filters. The theoretical basis of order statistics filters is the theory of robust statistics [26], [27]. There exist several filters which are members of this class. The vector median filter (VMF) is the best known member of this family [24], [28]. The rationale of the approach is that unrepresentative or outlying observations in sets of color vectors can be seen as contaminating the data and thus hampering the methods of signal restoration. Therefore, the different order statistics based filters provide means of interpreting or categorizing outliers and methods for handling them, either by rejecting them or by adopting methods that reduce their impact. In most cases, the filter employs some method of inference to minimize the influence of any outlier rather than rejecting it or including it in the working data set. Outliers are easily defined in scalar, univariate data samples, although outliers exist in multivariate data as well, such as color image vectors [29]. The fundamental notion of an outlier as an observation which is statistically unexpected in terms of some basic model can also be extended to multivariate data and to color signals in particular. However, the expression of this notion and the determination of the appropriate procedures to identify and accommodate outliers are by no means as straightforward when working in more than one dimension, mainly because a multivariate outlier no longer has the simple manifestation of an observation which deviates the most from the rest of the samples [30]. In univariate data analysis there is a natural ordering of the data, which enables extreme values to be identified and the distance of these outlying values from the center to be computed easily. As such, the problem of identifying and isolating any individual values which are atypical of those in the rest of the data set is a simple one. For this reason, a plethora of filtering techniques based on the concept of univariate ordering have been introduced. The popularity and widespread use of scalar order statistic filters led to the introduction of similar techniques for the analysis of multivariate, multichannel signals, such as color vectors. However, in order for such filters to be devised, the problem of ordering multivariate data must be solved. In this chapter, techniques and methodologies for ordering multivariate signals, with particular emphasis on color image signals, are introduced, examined and analyzed. The proposed ordering schemes will then be used to define a number of nonlinear, multichannel digital filters suitable for color images.
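The raster-scan window mechanism described earlier in this chapter (a (2N+1) x (2N+1) window W(n) slid over the image, with the filter output replacing the centre vector y_k) can be sketched as follows. The reflective border handling and the vector-median example plugged in as `filter_fn` are illustrative choices, not prescriptions from the text.

```python
import numpy as np

def filter_color_image(img, filter_fn, N=1):
    """Slide a (2N+1) x (2N+1) window W(n) over an RGB image in raster-scan
    order and replace each centre vector by filter_fn(window_vectors, centre).

    img is an H x W x 3 array; border pixels are handled by reflection
    (an implementation choice, not specified in the text).
    """
    padded = np.pad(img.astype(np.float64), ((N, N), (N, N), (0, 0)), mode="reflect")
    out = np.empty_like(img, dtype=np.float64)
    h, w, _ = img.shape
    for r in range(h):
        for c in range(w):
            window = padded[r:r + 2 * N + 1, c:c + 2 * N + 1].reshape(-1, 3)   # the set W(n)
            out[r, c] = filter_fn(window, img[r, c].astype(np.float64))        # centre sample y_k
    return out

def vector_median(window, centre):
    """Example filter_fn: the window sample minimizing the aggregate L2 distance
    to all other window vectors (the vector median filter mentioned in the text)."""
    dists = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=2).sum(axis=1)
    return window[np.argmin(dists)]

# usage: filtered = filter_color_image(noisy_rgb, vector_median, N=1)
```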

2.5 Multivariate Data Ordering Schemes

A multivariate signal is a signal where each sample has multiple components. It is also called a vector-valued, multichannel or multispectral signal. Color


images are typical examples of multivariate signals. A color image represented by the three primaries in the RGB coordinate system is a two-dimensional three-variate (three-channel) signal [12], [14], [35], [36]. Let X denote a p-dimensional random variable, e.g. a p-dimensional vector of random variables X = [X_1, X_2, \ldots, X_p]^T. The probability density function (pdf) and the cumulative distribution function (cdf) of this p-dimensional random variable will be denoted by f(X) and F(X), respectively. Now let x_1, x_2, \ldots, x_n be n random samples of the multivariate X. Each of the x_i is a p-dimensional vector of observations, x_i = [x_{i1}, x_{i2}, \ldots, x_{ip}]^T. The goal is to arrange the n values (x_1, x_2, \ldots, x_n) in some sort of order. The notion of data ordering, which is natural in the one-dimensional case, does not extend in a straightforward way to multivariate data, since there is no unambiguous, universally acceptable way to order n multivariate samples. Although no such unambiguous form of ordering exists, there are several ways to order the data, the so-called sub-ordering principles. The role of sub-ordering principles in multivariate data analysis was given in [34], [29]. Since, in effect, ranking procedures isolate outliers by properly weighting each ranked multivariate sample, these outliers can be discarded. The sub-ordering principles are therefore useful in detecting outliers in a multivariate sample set. Univariate data analysis is sufficient to detect any outliers in the data in terms of their extreme value relative to an assumed basic model and then to employ a robust accommodation method of inference. For multivariate data, however, an additional step in the process is required, namely the adoption of an appropriate sub-ordering principle as the basis for expressing the extremeness of observations. The sub-ordering principles are categorized into four types:
1. marginal ordering or M-ordering [34], [37], [38], [16], [39]
2. conditional ordering or C-ordering [34], [39], [40]
3. partial ordering or P-ordering [34], [41]
4. reduced (aggregated) ordering or R-ordering [34], [4], [16], [39]

2.5.1 Marginal Ordering

In the marginal ordering (M-ordering) scheme, the multivariate samples are ordered along each of the p dimensions independently, yielding:

x_{1(1)} \le x_{1(2)} \le \cdots \le x_{1(n)}
x_{2(1)} \le x_{2(2)} \le \cdots \le x_{2(n)}
\vdots
x_{p(1)} \le x_{p(2)} \le \cdots \le x_{p(n)}   (2.13)

According to the M-ordering principle, ordering is performed in each channel of the multichannel signal independently. The vector x_1 = [x_{1(1)}, x_{2(1)}, \ldots, x_{p(1)}]^T


consists of the minimal elements in each dimension and the vector x_n = [x_{1(n)}, x_{2(n)}, \ldots, x_{p(n)}]^T consists of the maximal elements in each dimension. The marginal median is defined as x_{\nu+1} = [x_{1(\nu+1)}, x_{2(\nu+1)}, \ldots, x_{p(\nu+1)}]^T for n = 2\nu + 1, and it may not correspond to any of the original multivariate samples. In contrast, in the scalar case there is a one-to-one correspondence between the original samples x_i and the order statistics x_{(i)}.

The probability distribution of p-variate marginal order statistics can be used to assist in the design and analysis of color image processing algorithms. Thus, the cumulative distribution function (cdf) and the probability density function (pdf) of marginal order statistics are described here. In particular, the analysis focuses on the derivation of three-variate (three-dimensional) marginal order statistics, which is of interest since three-dimensional vectors are used to describe color signals in the different color systems, such as RGB. The three-dimensional space is divided into eight subspaces by a point (x_1, x_2, x_3). The requested cdf

F_{r_1,r_2,r_3}(x_1, x_2, x_3) = \sum_{i_1=r_1}^{n} \sum_{i_2=r_2}^{n} \sum_{i_3=r_3}^{n} P[\,i_1 \text{ of } X_{1i} \le x_1, \; i_2 \text{ of } X_{2i} \le x_2, \; i_3 \text{ of } X_{3i} \le x_3\,]   (2.14)

is the cdf of the marginal order statistics X_{1(r_1)}, X_{2(r_2)}, X_{3(r_3)} when n three-variate samples are available [38]. Let n_i, i = 0, 1, \ldots, 7, denote the number of data points belonging to each of the eight subspaces. In this case:

P[\,i_1 \text{ of } X_{1i} \le x_1, \; i_2 \text{ of } X_{2i} \le x_2, \; i_3 \text{ of } X_{3i} \le x_3\,] = \sum_{n_0} \cdots \sum_{n_7} \frac{n!}{\prod_{i=0}^{7} n_i!} \prod_{i=0}^{7} F_i^{n_i}(x_1, x_2, x_3)   (2.15)

Given that the total number of points is \sum_{i=0}^{7} n_i = n, the following conditions hold for the number of data points lying in the different subspaces:

n_0 + n_2 + n_4 + n_6 = i_1
n_0 + n_1 + n_4 + n_5 = i_2
n_0 + n_1 + n_2 + n_3 = i_3   (2.16)

Thus, combining (2.14) and (2.15), the cdf for the three-variate case is given by [38]:

F_{r_1,r_2,r_3}(x_1, x_2, x_3) = \sum_{i_1=r_1}^{n} \sum_{i_2=r_2}^{n} \sum_{i_3=r_3}^{n} \sum_{n_0} \cdots \sum_{n_{2^3-1}} \frac{n!}{\prod_{i=0}^{2^3-1} n_i!} \prod_{i=0}^{2^3-1} F_i^{n_i}(x_1, x_2, x_3)   (2.17)

which is subject to the constraints of (2.16). The probability density function is given by:

f_{(r_1,r_2,r_3)}(x_1, x_2, x_3) = \frac{\partial^3 F_{r_1,r_2,r_3}(x_1, x_2, x_3)}{\partial x_1 \, \partial x_2 \, \partial x_3}   (2.18)

The joint cdf for the three-variate case can be calculated as follows [38]:

F_{r_1,r_2,r_3,s_1,s_2,s_3}(x_1, x_2, x_3, t_1, t_2, t_3) = \sum_{j_1=s_1}^{n} \sum_{i_1=r_1}^{j_1} \cdots \sum_{j_3=s_3}^{n} \sum_{i_3=r_3}^{j_3} \Phi(r)   (2.19)

with

\Phi(r) = P[\,i_1 \text{ of } X_{1i} \le x_1, \; j_1 \text{ of } X_{1i} \le t_1, \; i_2 \text{ of } X_{2i} \le x_2, \; j_2 \text{ of } X_{2i} \le t_2, \; i_3 \text{ of } X_{3i} \le x_3, \; j_3 \text{ of } X_{3i} \le t_3\,]   (2.20)

for x_i < t_i and r_i < s_i, i = 1, 2, 3. The two points (x_1, x_2, x_3) and (t_1, t_2, t_3) divide the three-dimensional space into 3^3 subspaces. If n_i, F_i, i = 0, 1, \ldots, (3^3 - 1), denote the number of data points and the probability masses in each subspace, then it can be proved that [38], [16]:

\Phi(r) = \sum_{n_0} \cdots \sum_{n_{3^3-1}} \frac{n!}{\prod_{i=0}^{3^3-1} n_i!} \prod_{i=0}^{3^3-1} F_i^{n_i}(x_1, x_2, x_3)   (2.21)

under the constraints:

\sum_{i=0}^{3^3-1} n_i = n   (2.22)

\sum_{I_0=0} n_i = i_1, \quad \sum_{I_1=0} n_i = i_2, \quad \sum_{I_2=0} n_i = i_3   (2.23)

\sum_{I_0=0,1} n_i = j_1, \quad \sum_{I_1=0,1} n_i = j_2, \quad \sum_{I_2=0,1} n_i = j_3   (2.24)

where i = (I_2, I_1, I_0) is the representation of the number i in base 3. Through (2.19)-(2.24), a numerically tractable way to calculate the joint cdf of the three-variate order statistics is possible.


2.5.2 Conditional Ordering

In conditional ordering (C-ordering) the multivariate samples are ordered conditional on one of the marginal sets of observations. Thus, one of the marginal components is ranked and the other components of each vector are listed according to the position of their ranked component. Assuming that the first dimension is ranked, the ordered samples would be represented as follows:

x_{1(1)} \le x_{1(2)} \le \cdots \le x_{1(n)}
x_{2[1]}, x_{2[2]}, \ldots, x_{2[n]}
\vdots
x_{p[1]}, x_{p[2]}, \ldots, x_{p[n]}   (2.25)

where x_{1(i)}, i = 1, 2, \ldots, n, are the marginal order statistics of the first dimension, and x_{j[i]}, j = 2, 3, \ldots, p, i = 1, 2, \ldots, n, are the quasi-ordered samples in

dimensions j = 2, 3, \ldots, p, conditional on the marginal ordering of the first dimension. These components are not ordered; they are simply listed according to the ranked components. In the two-dimensional case (p = 2) the statistics x_{2[i]}, i = 1, 2, \ldots, n, are called concomitants of the order statistics of x_1. The advantage of this ordering scheme is its simplicity, since only one scalar ordering is required to define the order statistics of the vector sample. The disadvantage of the C-ordering principle is that, since only information in one channel is used for ordering, it is assumed that all, or at least most, of the important ordering information is associated with that dimension. Needless to say, if this assumption does not hold, considerable loss of useful information may occur. As an example, consider the problem of ranking color signals in the YIQ color system. A conditional ordering scheme based on the luminance channel (Y) means that the chrominance information stored in the I and Q channels would be ignored in the ordering. Any advantages that could be gained in identifying outliers or extreme values based on color information would therefore be lost.

2.5.3 Partial Ordering

In partial ordering (P-ordering), subsets of data are grouped together forming minimum convex hulls. The first convex hull is formed such that the perimeter contains a minimum number of points and the resulting hull contains all other points in the given set. The points along this perimeter are denoted c-order group 1. These points form the most extreme group. The perimeter points are then discarded and the process repeats. The new perimeter points are denoted c-order group 2 and are then removed in order for the process to be continued. Although convex hull or elliptical peeling can be used for outlier isolation, this method provides no ordering within the groups and thus it is


not easily expressed in analytical terms. In addition, the determination of the convex hull is conceptually and computationally difficult, especially with higher-dimensional data. Thus, although trimming in terms of ellipsoids of minimum content [41], rather than convex hulls, has been proposed, P-ordering is rather infeasible for implementation in color image processing.

2.5.4 Reduced Ordering

In reduced (aggregating) or R-ordering, each multivariate observation x_i is reduced to a single scalar value by means of some combination of the component sample values. The resulting scalar values are then amenable to univariate ordering. Thus, the set x_1, x_2, \ldots, x_n can be ordered in terms of the values R_i = R(x_i), i = 1, 2, \ldots, n. The vector x_i which yields the maximum value R_{(n)} can be considered an outlier, provided that its extremeness is obvious compared to the assumed basic model. In contrast to M-ordering, the aim of R-ordering is to effect some sort of overall ordering of the original multivariate samples; by ordering in this way, multivariate ranking is reduced to a simple ranking operation on a set of transformed values. This type of ordering cannot be interpreted in the same manner as conventional scalar ordering, as there are no absolute minimum or maximum vector samples. Given that multivariate ordering is based on a reduction function R(\cdot), points which diverge from the `center' in opposite directions may occupy the same order ranks. Furthermore, by utilizing a reduction function as the means to accomplish multivariate ordering, useful information may be lost. Since distance measures have a natural mechanism for the identification of outliers, the reduction function most frequently employed in R-ordering is the generalized (Mahalanobis) distance [29], [30]:

R(x; \bar{x}, \Gamma) = (x - \bar{x})^T \Gamma^{-1} (x - \bar{x})   (2.26)

where \bar{x} is a location parameter for the data set, or underlying distribution, under consideration and \Gamma is a dispersion parameter, with \Gamma^{-1} used to apply a differential weighting to the components of the multivariate observation inversely related to the population variability. The parameters of the reduction function can be given arbitrary values, such as \bar{x} = 0 and \Gamma = I, or they can be assigned the true mean \mu and dispersion \Sigma settings. Depending on the state of knowledge about these values, their standard estimates:

\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i   (2.27)

and

S = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^T   (2.28)


can be used instead. Within the framework of the generalized distance, different reduction functions can be utilized in order to identify the contribution of an individual multivariate sample. A list of such functions includes, among others, the following [42], [43]:

q_i^2 = (x_i - \bar{x})^T (x_i - \bar{x})   (2.29)
t_i^2 = (x_i - \bar{x})^T S (x_i - \bar{x})   (2.30)
u_i^2 = \frac{(x_i - \bar{x})^T S (x_i - \bar{x})}{(x_i - \bar{x})^T (x_i - \bar{x})}   (2.31)
v_i^2 = \frac{(x_i - \bar{x})^T S^{-1} (x_i - \bar{x})}{(x_i - \bar{x})^T (x_i - \bar{x})}   (2.32)
d_i^2 = (x_i - \bar{x})^T S^{-1} (x_i - \bar{x})   (2.33)
d_k^2 = (x_i - x_k)^T S^{-1} (x_i - x_k)   (2.34)

with i < k = 1, 2, \ldots, n. Each one of these functions identifies the contribution of the individual multivariate sample to specific effects as follows [43]:
1. q_i^2 isolates data which excessively inflate the overall scale.
2. t_i^2 determines which data have the greatest influence on the orientation and scale of the first few principal components [44], [45].
3. u_i^2 emphasizes more the orientation and less the scale of the principal components.
4. v_i^2 measures the relative contribution to the orientation of the last few principal components.
5. d_i^2 uncovers the data points which lie far away from the general scatter of points.
6. d_k^2 has the same objective as d_i^2 but provides far more detail on inter-object separation.

The following comments should be made regarding the reduction functions discussed in this section:
1. If outliers are present in the data, then \bar{x} and S are not the best estimates of the location and dispersion of the data, since they will be affected by the outliers. In the face of outliers, robust estimators of both the mean value and the covariance matrix should be utilized. A robust estimate of the matrix S is important because outliers inflate the sample covariance and thus may mask each other, making outlier detection difficult even in the presence of only a few outliers. Various design options can be considered, among them the utilization of the marginal median (the median evaluated using M-ordering) as a robust estimate of the location. However, care must be taken since the marginal median of n multivariate samples is


not necessarily one of the input samples. Depending on the estimator of the location used in the ordering procedure, the following schemes can be distinguished [15].
   a) R-ordering about the mean (mean R-ordering). Given a set of n multivariate samples x_i, i = 1, 2, \ldots, n, in a processing window and the mean \bar{x} of the multivariates, the mean R-ordering is defined as:
      (x_{(1)}, x_{(2)}, \ldots, x_{(n)} : \bar{x})   (2.35)
   where (x_{(1)}, x_{(2)}, \ldots, x_{(n)}) is the ordering defined by d_i^2 = (x_i - \bar{x})^T (x_i - \bar{x}) and d_{(1)}^2 \le d_{(2)}^2 \le \cdots \le d_{(n)}^2.
   b) R-ordering about the marginal median (median R-ordering). Given a set of n multivariate samples x_i, i = 1, 2, \ldots, n, in a processing window and the marginal median x_m of the multivariates, the median R-ordering is defined as:
      (x_{(1)}, x_{(2)}, \ldots, x_{(n)} : x_m)   (2.36)
   where (x_{(1)}, x_{(2)}, \ldots, x_{(n)}) is the ordering defined by d_i^2 = (x_i - x_m)^T (x_i - x_m) and d_{(1)}^2 \le d_{(2)}^2 \le \cdots \le d_{(n)}^2.
   c) R-ordering about the center sample (center R-ordering). Given a set of n multivariate samples x_i, i = 1, 2, \ldots, n, in a processing window and the sample x_c at the window center, the center R-ordering is defined as:
      (x_{(1)}, x_{(2)}, \ldots, x_{(n)} : x_c)   (2.37)
   where (x_{(1)}, x_{(2)}, \ldots, x_{(n)}) is the ordering defined by d_i^2 = (x_i - x_c)^T (x_i - x_c) and d_{(1)}^2 \le d_{(2)}^2 \le \cdots \le d_{(n)}^2. Thus, x_{(1)} = x_c.
2. Statistical measures such as d_i^2 and d_k^2 are invariant under non-singular transformations of the data.
3. Statistics which measure the influence on the first few principal components, such as t_i^2, u_i^2, d_i^2 and d_k^2, are useful in detecting those outliers which inflate the variance, covariance or correlation in the data. Statistical measures such as v_i^2 will detect those outliers that add insignificant dimensions and/or singularities to the data.

Statistical descriptions of the descriptive measures listed above can be used to assist in the design and analysis of color image processing algorithms. As an example, the statistical description of the d_i^2 descriptor will be presented. Given the multivariate data set (x_1, x_2, \ldots, x_n) and the population mean \bar{x}, interest lies in determining the distribution of the distances d_i^2 or, equivalently, of D_i = (d_i^2)^{1/2}. Let the probability density function (pdf) of D for the input be denoted as f_D and the pdf of the ith ranked distance as f_{D_{(i)}}. If the multivariate data samples are independent and identically distributed, then D will also be independent and identically distributed (i.i.d.). Based on this assumption, f_{D_{(i)}} can be evaluated in terms of f_D as follows [1], [39]:

f_{D_{(i)}}(x) = \frac{n!}{(i-1)!(n-i)!} F_D(x)^{i-1} [1 - F_D(x)]^{n-i} f_D(x)   (2.38)

with F_D(x) the cumulative distribution function (cdf) of the distance D. As an example, assume that the multivariate samples x belong to a multivariate elliptical distribution with parameters \mu_x, \Sigma_x of the form:

f(x) = K_p |\Sigma_x|^{-1/2} h\left( (x - \mu_x)^T \Sigma_x^{-1} (x - \mu_x) \right)   (2.39)

for some function h(\cdot), where K_p is a normalizing constant and \Sigma_x is positive definite. This class of distributions includes the multivariate Gaussian distribution and all other densities whose contours of equal probability have an elliptical shape. If a distribution, such as the multivariate Gaussian, belongs to this class, then all its marginal distributions and its conditional distributions also belong to this class. For the special case of the simple Euclidean distance d_i = ((x - \mu_x)^T (x - \mu_x))^{1/2}, f_D(x) has the general form:

f_D(x) = \frac{2 \pi^{p/2} K_p}{\Gamma(p/2)} \, x^{p-1} h(x^2)   (2.40)

where \Gamma(\cdot) is the gamma function and x \ge 0. If the elliptical distribution assumed initially for the multivariate samples x_i is considered to be multivariate Gaussian with mean value \mu_x and covariance \Sigma_x = \sigma^2 I_p, then the normalizing constant is K_p = (2\pi\sigma^2)^{-p/2} and h(x^2) = \exp\left( -\frac{x^2}{2\sigma^2} \right), and thus f_D(x) takes the form of the (generalized) Rayleigh distribution:

f_D(x) = \frac{x^{p-1}}{2^{(p/2)-1} \sigma^p \Gamma(p/2)} \exp\left( -\frac{x^2}{2\sigma^2} \right)   (2.41)

Based on this distribution, the kth moment of D is given as:

E[D^k] = (2\sigma^2)^{k/2} \frac{\Gamma\left( \frac{p+k}{2} \right)}{\Gamma\left( \frac{p}{2} \right)}   (2.42)

with k \ge 0. It can easily be seen from the above equation that the expected value of the distance D increases monotonically as a function of the parameter \sigma in the assumed multivariate Gaussian distribution. To complete the analysis, the cumulative distribution function F_D is needed. Although there is no closed-form expression for the cdf of a Rayleigh random variable, for the special case where p is an even number the requested cdf can be expressed as:

F_D(x) = 1 - \exp\left( -\frac{x^2}{2\sigma^2} \right) \sum_{k=0}^{(p/2)-1} \frac{1}{k!} \left( \frac{x^2}{2\sigma^2} \right)^k   (2.43)

Using this expression, the following pdf for the distance D_{(i)} can be obtained:

f_{D_{(i)}}(x) = C x^{p-1} \exp\left( -\frac{x^2}{2\sigma^2} \right) F_D(x)^{i-1} (1 - F_D(x))^{n-i}   (2.44)

where C = \frac{n!}{(i-1)!(n-i)! \, 2^{(p/2)-1} \sigma^p \Gamma(p/2)} is a normalization constant.

In summary, R-ordering is particularly useful in the task of multivariate outlier detection, since the reduction function can reliably identify outliers in multivariate data samples. Also, unlike M-ordering, it treats the data as vectors rather than breaking them up into scalar components. Furthermore, it gives all the components equal weight of importance, unlike C-ordering. Finally, R-ordering is superior to P-ordering in its simplicity and its ease of implementation, making it the sub-ordering principle of choice for multivariate data analysis.
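A short sketch of R-ordering with the generalized distance of (2.26), using the sample mean and covariance of (2.27)-(2.28) as the location and dispersion estimates; the random sample data in the usage line are arbitrary.

```python
import numpy as np

def r_order(samples):
    """Reduced (R-) ordering: rank multivariate samples by the generalized
    (Mahalanobis) distance d_i^2 = (x_i - xbar)^T S^{-1} (x_i - xbar),
    cf. (2.26) and (2.33), with xbar and S the estimates of (2.27)-(2.28)."""
    x = np.asarray(samples, dtype=np.float64)
    xbar = x.mean(axis=0)                         # location estimate, eq. (2.27)
    S = np.cov(x, rowvar=False)                   # dispersion estimate, eq. (2.28)
    diff = x - xbar
    d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(S), diff)   # d_i^2, eq. (2.33)
    order = np.argsort(d2)                        # ascending: most central ... most extreme
    return x[order], d2[order]

# the sample with the largest d_i^2 is the prime outlier candidate
ranked, dists = r_order(np.random.default_rng(0).normal(size=(9, 3)))
```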

2.6 A Practical Example

To better illustrate the effect of the different ordering schemes discussed here, the order statistics for a sample set of data will be provided. For simplicity, two-dimensional data vectors will be considered. In the example, seven vectors will be used. The data points are:

D_o: \; x_1 = (1, 1), \; x_2 = (5, 3), \; x_3 = (7, 2), \; x_4 = (3, 3), \; x_5 = (5, 4), \; x_6 = (6, 5), \; x_7 = (6, 8)   (2.45)

(I) Marginal ordering. For the case of M-ordering, the first and second components are ordered independently as follows:

[1, 5, 7, 3, 5, 6, 6] \Rightarrow [1, 3, 5, 5, 6, 6, 7]   (2.46)

and

[1, 3, 2, 3, 4, 5, 8] \Rightarrow [1, 2, 3, 3, 4, 5, 8]   (2.47)

and thus, the ordered vectors are:

D_M: \; x_{(1)} = (1, 1), \; x_{(2)} = (3, 2), \; x_{(3)} = (5, 3), \; x_{(4)} = (5, 3), \; x_{(5)} = (6, 4), \; x_{(6)} = (6, 5), \; x_{(7)} = (7, 8)   (2.48)


with the median vector (5, 3) and the minimum/maximum vectors (1, 1) and (7, 8) respectively.
(II) Conditional ordering. For the case of C-ordering, the second channel will be used for ordering, with the second components ordered as follows:

[1, 3, 2, 3, 4, 5, 8] \Rightarrow [1, 2, 3, 3, 4, 5, 8]   (2.49)

and thus, the corresponding vectors are ordered as:

D_C: \; x_{(1)} = (1, 1), \; x_{(2)} = (7, 2), \; x_{(3)} = (5, 3), \; x_{(4)} = (3, 3), \; x_{(5)} = (5, 4), \; x_{(6)} = (6, 5), \; x_{(7)} = (6, 8)   (2.50)

where the median vector is (3, 3) and the minimum/maximum are defined as (1, 1) and (6, 8) respectively.
(III) Partial ordering. For the case of P-ordering, the ordered sub-groups for the data set examined here are:

D_P: \; C_1 = [(1, 1), (6, 8), (7, 2)], \quad C_2 = [(6, 5), (5, 3), (3, 3)], \quad C_3 = [(5, 4)]   (2.51)

As can be seen, there is no ordering within the groups and thus no way to distinguish a median or most central vector. The only information obtained is that C_3 is the most central group and C_1 the most extreme group.
(IV) Reduced ordering. For the case of R-ordering, the following reduction function is used:

q_i = \left( (x_i - \bar{x})^T (x_i - \bar{x}) \right)^{1/2}   (2.52)

where \bar{x} = \frac{1}{7} \sum_{i=1}^{7} x_i = (4.7, 3.7). This allows the q_i's to be calculated as:

q_1 = 4.58 \text{ for } x_1 = (1, 1), \; q_2 = 0.76 \text{ for } x_2 = (5, 3), \; q_3 = 2.86 \text{ for } x_3 = (7, 2), \; q_4 = 1.85 \text{ for } x_4 = (3, 3), \; q_5 = 0.42 \text{ for } x_5 = (5, 4), \; q_6 = 1.82 \text{ for } x_6 = (6, 5), \; q_7 = 4.49 \text{ for } x_7 = (6, 8)   (2.53)

and thus, the ordered data set is as follows:

D_R: \; x_{(1)} = (5, 4), \; x_{(2)} = (5, 3), \; x_{(3)} = (6, 5), \; x_{(4)} = (3, 3), \; x_{(5)} = (7, 2), \; x_{(6)} = (6, 8), \; x_{(7)} = (1, 1)   (2.54)

with x_{(1)} = (5, 4) the most centrally located point and x_{(7)} = (1, 1) the most outlying data sample.
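The orderings of this example can be reproduced numerically. The snippet below recomputes the marginal ordering of (2.48), the conditional ordering of (2.50) and the reduced ordering of (2.54) for the data set D_o; P-ordering is omitted, since, as noted above, it has no simple analytical expression.

```python
import numpy as np

Do = np.array([(1, 1), (5, 3), (7, 2), (3, 3), (5, 4), (6, 5), (6, 8)], dtype=float)

# (I) Marginal ordering (2.48): sort each component independently
DM = np.sort(Do, axis=0)

# (II) Conditional ordering (2.50): rank on the second channel, carry the first along
DC = Do[np.argsort(Do[:, 1], kind="stable")]

# (IV) Reduced ordering (2.54): rank by the Euclidean distance q_i from the mean, eq. (2.52)
q = np.linalg.norm(Do - Do.mean(axis=0), axis=1)
DR = Do[np.argsort(q)]

print(DM)   # rows (1,1), (3,2), (5,3), (5,3), (6,4), (6,5), (7,8)
print(DC)   # rows (1,1), (7,2), (5,3), (3,3), (5,4), (6,5), (6,8)
print(DR)   # rows (5,4), (5,3), (6,5), (3,3), (7,2), (6,8), (1,1)
```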

2.7 Vector Ordering

The sub-ordering principles discussed here can be used to rank any kind of multivariate data. However, to define an ordering scheme which is attractive for color image processing, it should be geared towards the ordering of color image vectors. Such an ordering scheme should satisfy the following criteria:
1. The proposed ordering scheme should be useful from a robust estimation perspective, allowing for the extension of the operations of scalar order statistic filters to the color, multivariate domain.
2. The proposed ordering scheme should preserve the notion of varying levels of extremeness that is present in the scalar ordering case.
3. The proposed ordering scheme should take into consideration the type of multivariate data being used. Therefore, since the RGB coordinate system is used throughout this work for color image filtering, the ordering scheme should give equal importance to the three primary color channels and should consider all the information contained in each of the three channels.

Based on these three principles, the ordering scheme that will be utilized is a variation of the R-ordering scheme that employs a dissimilarity (or, alternatively, similarity) measure on the set of x_i. That is to say, the aggregate measure of point x_i from all other points:

R_a(x_i) = \sum_{j=1}^{n} R(x_i, x_j)   (2.55)

is used for ranking purposes. The scalar quantities R_{ai} = R_a(x_i) are then ranked in order of magnitude and the associated vectors are correspondingly ordered:

R_{a(1)} \le R_{a(2)} \le \cdots \le R_{a(n)}   (2.56)

x_{(1)}, x_{(2)}, \ldots, x_{(n)}   (2.57)


Using the ordering scheme proposed here, the ordered x_{(i)} have a one-to-one relationship with the original samples x_i, unlike in marginal ordering, and furthermore all the components are given equal weight or importance, unlike in conditional ordering. The proposed ordering scheme focuses on the inter-relationships between the multivariate samples, since it computes the similarity or distance between all pairs of data points in the sample set. The output of the ranking procedure depends critically on the type of data from which the computation is to be made, and on the function R(x_i, x_j) selected to evaluate the similarity s(i, j) or distance d(i, j) between the two vectors x_i and x_j. In the rest of the chapter, measures suitable for this task are introduced and discussed.
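A sketch of the aggregate-distance ordering of (2.55)-(2.57): each vector is ranked by the sum of its distances to every other vector in the set, here with the Euclidean norm playing the role of the dissimilarity R(x_i, x_j) as one possible choice; the lowest-ranked vector is the sample a vector median filter would output.

```python
import numpy as np

def aggregate_order(vectors):
    """Rank vectors by R_a(x_i) = sum_j R(x_i, x_j), eq. (2.55), with R the L2 distance.

    Returns the vectors sorted from most central (smallest aggregate distance,
    the vector-median candidate) to most extreme (largest aggregate distance)."""
    x = np.asarray(vectors, dtype=np.float64)
    pairwise = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)   # R(x_i, x_j)
    Ra = pairwise.sum(axis=1)                                          # eq. (2.55)
    idx = np.argsort(Ra)                                               # eqs. (2.56)-(2.57)
    return x[idx], Ra[idx]
```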

2.8 The Distance Measures

The most commonly used measure to quantify the distance between two p-dimensional signals is the generalized Minkowski metric (L_M norm). It is defined for two vectors x_i and x_j as follows [44]:

d_M(i, j) = \left( \sum_{k=1}^{p} |x_i^k - x_j^k|^M \right)^{1/M}   (2.58)

where p is the dimension of the vector x_i and x_i^k is the kth element of x_i. Three special cases of the L_M metric are of particular interest, namely:
1. The city-block distance (L_1 norm), corresponding to M = 1. In this case, the distance between the two p-dimensional vectors is the sum of the absolute differences between their components:

d_1(i, j) = \sum_{k=1}^{p} |x_i^k - x_j^k|   (2.59)

2. The Euclidean distance (L_2 norm), corresponding to M = 2. In this model, the distance between the two p-dimensional signals is the square root of the sum of the squared differences between their components:

d_2(i, j) = \left( \sum_{k=1}^{p} (x_i^k - x_j^k)^2 \right)^{1/2}   (2.60)

3. The chess-board distance (L_\infty norm), corresponding to M \to \infty. In this case, the distance between the two p-dimensional vectors is equal to the maximum distance among their components:

d_\infty(i, j) = \max_k |x_i^k - x_j^k|   (2.61)
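The three special cases (2.59)-(2.61) of the Minkowski metric in compact form; the two RGB vectors in the usage line are arbitrary.

```python
import numpy as np

def minkowski(xi, xj, M):
    """Generalized Minkowski distance of eq. (2.58): M = 1 gives the city-block
    distance (2.59), M = 2 the Euclidean distance (2.60), and M = np.inf the
    chess-board distance (2.61)."""
    diff = np.abs(np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float))
    if np.isinf(M):
        return diff.max()
    return (diff ** M).sum() ** (1.0 / M)

xi, xj = (200, 30, 40), (180, 60, 35)
d1, d2, dinf = minkowski(xi, xj, 1), minkowski(xi, xj, 2), minkowski(xi, xj, np.inf)
```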


The Euclidean distance is relatively expensive, since it involves the evaluation of the squares of the componentwise distances and requires the calculation of a square root. To accommodate such operations, floating point arithmetic is required for the evaluation of the distance. On the other hand, both the L_1 and L_\infty norms can be evaluated using integer arithmetic, resulting in computationally attractive distance evaluation algorithms. In addition, to alleviate the problem, fast approximations to the Euclidean distance have recently been proposed. These approximate distances use a linear combination of the absolute componentwise distances to approximate the L_2 norm. The general form of the approximate Euclidean distance (L_{2a} norm) is as follows [46], [47]:

d_{2a}(i, j) = \sum_{k=1}^{p} a_k |x_i^k - x_j^k|   (2.62)

with a_k = k^{1/2} - (k-1)^{1/2}, k = 1, 2, \ldots, p, the weights in the approximation formula. For multichannel signals with relatively small dimensions (p < 5), the computations are sped up further by rounding the weights to negative powers of 2, a_k = 2^{-(k-1)}, so that the multiplications between the weights and the vector components can be implemented by bit shifting, which is a very fast operation. The Minkowski metric discussed above is only one of many possible choices [44], [43]. Other measures can be devised in order to quantify distances among multichannel signals. One such measure is the Canberra distance, defined as follows [43]:

d_C(i, j) = \sum_{k=1}^{p} \frac{|x_i^k - x_j^k|}{|x_i^k + x_j^k|}   (2.63)

where p is the dimension of the vector x_i and x_i^k is the kth element of x_i. The Canberra metric applies only to non-negative multivariate data, which is the case when color vectors described in the RGB reference system are considered. Another distance measure applicable only to vectors with non-negative components, such as color signals, is the Czekanowski coefficient, defined as follows [43]:

d_Z(i, j) = 1 - \frac{2 \sum_{k=1}^{p} \min(x_{ik}, x_{jk})}{\sum_{k=1}^{p} (x_{ik} + x_{jk})}   (2.64)

If the variables under study are on very different scales or of different quantities, then it makes sense to standardize the data prior to applying any of these distance measures, in order to ensure that no single variable dominates the results.
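For non-negative data such as RGB vectors, the Canberra distance (2.63) and the Czekanowski coefficient (2.64) can be sketched as follows; the small eps guard against zero denominators is an implementation choice not discussed in the text.

```python
import numpy as np

def canberra(xi, xj, eps=1e-12):
    """Canberra distance, eq. (2.63); eps guards channels where both values are zero."""
    xi, xj = np.asarray(xi, float), np.asarray(xj, float)
    return np.sum(np.abs(xi - xj) / (np.abs(xi + xj) + eps))

def czekanowski(xi, xj, eps=1e-12):
    """Czekanowski coefficient, eq. (2.64), for vectors with non-negative components."""
    xi, xj = np.asarray(xi, float), np.asarray(xj, float)
    return 1.0 - 2.0 * np.sum(np.minimum(xi, xj)) / (np.sum(xi + xj) + eps)
```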


Of course, there are many other measures by which a distance function can be constructed. Depending on the nature of the problem and the constraints imposed by the design, one method may be more appropriate than the other. Furthermore, measures other than distance can be used to measure similarity between multivariate vector signals, as the next section will attest.

2.9 The Similarity Measures

Distance metrics are not the only approach to the problem of defining similarity between two multidimensional signals. Any non-parametric function s(x_i, x_j) can be used to compare the two multichannel signals x_i and x_j. This can be done by utilizing a symmetric function whose value is large when x_i and x_j are similar. An example of such a function is the normalized inner product, defined as [44]:

s_1(x_i, x_j) = \frac{x_i^T x_j}{|x_i| \, |x_j|}   (2.65)

which corresponds to the cosine of the angle between the two vectors x_i and x_j. Therefore, the angle between the two vectors can be considered as a measure of their similarity. The cosine of the angle (or the magnitude of the angle) discussed here quantifies their similarity in orientation. Therefore, in applications where the orientation difference between two vector signals is of importance, the normalized inner product, or equivalently the angular distance

\theta = \cos^{-1}\left( \frac{x_i^T x_j}{|x_i| \, |x_j|} \right)   (2.66)

can be used instead of the L_M metric functions to quantify the dissimilarity between the two vectors. As an example, consider color images where the color signals appear as three-variate vectors in the RGB color space. It was argued in [12] that similar colors have almost parallel orientations, while significantly different colors point in different overall directions in the three-variate color space. Thus, the angular distance, which quantifies the orientation difference between two color signals, is a meaningful measure of their similarity. It is obvious that a generalized similarity measure which can effectively quantify differences among multichannel signals should take into consideration both the magnitude and the orientation of each vector signal. The distance and similarity measures discussed thus far utilize only part of the information carried by the vector signal. It is anticipated that a generalized measure based on both the magnitude and the orientation of the vectors will provide a robust solution to the problem of similarity between two vectors.


To this end, a new similarity measure was introduced [48]. The proposed measure defines the similarity between two vectors x_i and x_j as follows:

s_2(x_i, x_j) = \left( \frac{x_i^T x_j}{|x_i| \, |x_j|} \right) \left( 1 - \frac{\big| |x_i| - |x_j| \big|}{\max(|x_i|, |x_j|)} \right)   (2.67)

As can be seen, this similarity measure takes into consideration both the direction and the magnitude of the vector inputs. The first part of the measure is equivalent to the angular distance defined previously and the second part is related to the normalized difference in magnitude. Thus, if the two vectors under consideration have the same length, the second part of (2.67) becomes unity and only the directional information is used. On the other hand, if the vectors under consideration have the same direction in the vector space (collinear vectors), the first part (orientation) is unity and the similarity measure of (2.67) is based only on the magnitude difference. The proposed measure can be considered a member of the generalized `content model' family of measures, which can be used to define similarity between multidimensional signals [49]-[51]. The main idea behind the `content model' family of similarity measures is that the similarity between two vectors is regarded as the degree of common content in relation to the total content of the two vectors [52]-[58]. Therefore, given the common quantity, the commonality C_{ij}, and the total quantity, the totality T_{ij}, the similarity between x_i and x_j is defined as:

s(x_i, x_j) = \frac{C_{ij}}{T_{ij}}   (2.68)
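A sketch of the normalized inner product (2.65) and the combined magnitude-orientation measure s2 of (2.67), assuming the input vectors are nonzero.

```python
import numpy as np

def s1(xi, xj):
    """Normalized inner product, eq. (2.65): cosine of the angle between xi and xj."""
    xi, xj = np.asarray(xi, float), np.asarray(xj, float)
    return float(xi @ xj) / (np.linalg.norm(xi) * np.linalg.norm(xj))

def s2(xi, xj):
    """Similarity measure of eq. (2.67): orientation term times a magnitude term."""
    xi, xj = np.asarray(xi, float), np.asarray(xj, float)
    mi, mj = np.linalg.norm(xi), np.linalg.norm(xj)
    orientation = float(xi @ xj) / (mi * mj)          # first factor of (2.67)
    magnitude = 1.0 - abs(mi - mj) / max(mi, mj)      # second factor of (2.67)
    return orientation * magnitude
```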

Based on the general framework of (2.68), different similarity measures can be obtained by utilizing different commonality and totality concepts. Given two input signals x_i and x_j, assume that the angle between them is \theta and that their magnitudes are |x_i| and |x_j| respectively. As before, the magnitudes of the vectors represent intensity and the angle between the vectors quantifies the orientation difference between them. Based on these elements, the commonality can be defined as the sum of the projections of one vector onto the other and the totality as the sum of their magnitudes. Therefore, the similarity model can be written as:

s_3(x_i, x_j) = \frac{h_i + h_j}{|x_i| + |x_j|} = \frac{|x_i| \cos\theta + |x_j| \cos\theta}{|x_i| + |x_j|} = \cos\theta   (2.69)

where h_i = |x_i| \cos\theta. Although this content model, used in [55], [56], is equivalent to the normalized inner product (cosine of the angle) similarity model of (2.65), different similarity measures can be devised if the commonality and/or the totality between the two vectors is defined differently. Experimental studies have revealed that there is a systematic deviation between empirically measured similarity values and those obtained through the utilization of the model in [52], especially in applications where the magnitudes of the vectors are of


importance. To compensate for the discrepancy, the totality T_{ij} was redefined as the vector sum of the two vectors under consideration. In such a case, similarity was defined as:

s_4(x_i, x_j) = \frac{h_i + h_j}{\left( |x_i|^2 + |x_j|^2 + 2|x_i||x_j|\cos\theta \right)^{1/2}}   (2.70)

In the special case of vectors with equal magnitudes, the similarity measure is based solely on the orientation difference between the two vectors and can be written as:

s_4(x_i, x_j) = \frac{\cos\theta}{\cos(\theta/2)}   (2.71)

These are not the only similarity measures which can be devised based on the content-model approach. For example, it is also possible to define the commonality between two vectors as the vector algebraic sum, instead of the simple sum, of their projections. That gives a mathematical value of commonality lower than the one used in the models reported earlier. Using the two totality measures, two new similarity measures can be constructed as:

s_5(x_i, x_j) = \frac{\left( |h_i|^2 + |h_j|^2 + 2|h_i||h_j|\cos\theta \right)^{1/2}}{\left( |x_i|^2 + |x_j|^2 + 2|x_i||x_j|\cos\theta \right)^{1/2}}   (2.72)

or

s_5(x_i, x_j) = \frac{\cos\theta \left( |x_i|^2 + |x_j|^2 + 2|x_i||x_j|\cos\theta \right)^{1/2}}{|x_i| + |x_j|}   (2.73)

If only the orientation similarity between the two vectors is of interest, assuming that |x_i| = |x_j|, the above similarity measure can be rewritten as:

s_5(x_i, x_j) = \cos\theta \cos(\theta/2)   (2.74)

If, on the other hand, the totality T_{ij} is defined as the algebraic sum of the original vectors and the commonality C_{ij} as the algebraic sum of the corresponding projections, the resulting similarity measure can be expressed as:

s_6(x_i, x_j) = \frac{\cos\theta \, p_{(i,j)}}{p_{(i,j)}} = \cos\theta   (2.75)

with

p_{(i,j)} = \left( |x_i|^2 + |x_j|^2 + 2|x_i||x_j|\cos\theta \right)^{1/2}   (2.76)

which is the same expression obtained through the utilization of the inner product in (2.65).


Other members of the content-based family of similarity measures can be obtained by modifying either the commonality or the totality, or both. The formula of (2.68) can be seen as a guideline for the construction of specific models in which the common part and the total part are specified. As a general observation, it can be claimed that when the totality and the commonality are derived according to the same principle, e.g. the sum of vectors, the cosine of the angle between the two vectors can be used to quantify similarity. On the other hand, when the commonality and the totality are derived according to different principles, similarity is defined as a function of both the angle between the vectors and their corresponding magnitudes. Content-based measures can also be used to define dissimilarity among vector signals. This is the approach taken in [57], where the emphasis is on what is uncommon to the two vectors instead of on what is common. In this dissimilarity model, the uncommon part of the vectors divided by the total part is taken as the measure of their dissimilarity. It was suggested in [57] that the part not in common be specified as the distance between the two vector termini, with the totality defined as the vector sum of the two vectors under consideration. Further, assuming that similarity and distance are complementary, the following similarity measure was proposed:

s_7(x_i, x_j) = 1 - \frac{\left( |x_i|^2 + |x_j|^2 - 2|x_i||x_j|\cos\theta \right)^{1/2}}{\left( |x_i|^2 + |x_j|^2 + 2|x_i||x_j|\cos\theta \right)^{1/2}}   (2.77)

where the numerator of the ratio represents the distance between the two vector termini, e.g. the vector difference, and the denominator is an indication of the totality. The different non-metric similarity measures described here can be used instead of the Minkowski-type distance measures to quantify the distance between a vector under consideration and the ideal prototype in our membership function mechanism, as discussed earlier. Although in the s_7(x_i, x_j) model it was assumed that distance and similarity are complementary, judgments of difference may be related to similarity in various ways [56], [59], [60]. The most commonly used approach is that suggested in [58] and used in [48], where difference judgments are correlated negatively with similarity judgments. In most applications difference judgments are often the inverse of similarity judgments, and the choice between the two rests on practical considerations. It should be emphasized at this point that a satisfactory approximation of the similarity (or difference) mechanism with a static model, such as those considered here, can be obtained only when the comparison of vector signals is concentrated in a relatively small part of the p-variate space. That is to say, relatively high homogeneity is required [57]. Other forms of similarity can also be used to rank multivariate, vector-like signals. Assuming that two vector signals x_i, x_j are available, their degree of similarity can be obtained by any of the following methods [61]:


1. Correlation coefficient method. Defining \bar{x}_i = \frac{1}{p} \sum_{k=1}^{p} x_{ik} and \bar{x}_j = \frac{1}{p} \sum_{k=1}^{p} x_{jk}, the correlation coefficient between the two vectors is given as follows:

s_{ij} = \frac{\sum_{k=1}^{p} |x_{ik} - \bar{x}_i| \, |x_{jk} - \bar{x}_j|}{\left( \sum_{k=1}^{p} (x_{ik} - \bar{x}_i)^2 \right)^{1/2} \left( \sum_{k=1}^{p} (x_{jk} - \bar{x}_j)^2 \right)^{1/2}}   (2.78)

2. Exponential similarity method.

s_{ij} = \frac{1}{p} \sum_{k=1}^{p} \exp\left( -\frac{3}{4} \frac{(x_{ik} - x_{jk})^2}{\sigma_k^2} \right)   (2.79)

with the parameter \sigma_k > 0 a design parameter, the value of which is determined from the data.
3. The absolute-value exponent method.

s_{ij} = \exp\left( -\beta \sum_{k=1}^{p} |x_{ik} - x_{jk}| \right)   (2.80)

where, as before, the parameter \beta > 0 is a design parameter used to regulate the rate of similarity, with its value determined by the designer.
4. The absolute-value reciprocal method.

s_{ij} = \begin{cases} 1 & \text{if } i = j \\ 1 - \beta \sum_{k=1}^{p} |x_{ik} - x_{jk}| & \text{if } i \ne j \end{cases}   (2.81)

where \beta is selected so that 0 \le s_{ij} \le 1.
5. Maximum-minimum method.

s_{ij} = \frac{\sum_{k=1}^{p} \min(x_{ik}, x_{jk})}{\sum_{k=1}^{p} \max(x_{ik}, x_{jk})}   (2.82)

6. Arithmetic-mean minimum method.

s_{ij} = \frac{\sum_{k=1}^{p} \min(x_{ik}, x_{jk})}{\frac{1}{2} \sum_{k=1}^{p} (x_{ik} + x_{jk})}   (2.83)

7. Geometric-mean minimum method.

s_{ij} = \frac{\sum_{k=1}^{p} \min(x_{ik}, x_{jk})}{\sum_{k=1}^{p} (x_{ik} x_{jk})^{1/2}}   (2.84)

Of course, there are many other methods by which a similarity or distance value between two vector signals can be constructed. Depending on the nature and the objective of the problem at hand, one method may be more appropriate than another. The fundamental idea, however, is that through the reduction function a multivariate space is mapped onto a scalar space. Techniques other than distance or similarity measures can be utilized to assist with this mapping. One such technique is the space-filling curve.


Space-filling curves can be defined as a set of discrete curves that make it possible to cover all the points of a p-dimensional multivariate space. In particular, a space-filling curve must pass through all the points of the space only once, and it realizes a mapping of the p-dimensional space onto a scalar interval, thus allowing multivariate data to be ranked. That is to say, it is possible to associate with each point in the p-dimensional space a scalar value which is directly proportional to the length of the curve necessary to reach that point starting from the origin of the coordinates. Then, as for all vector ordering schemes, vector ranking can be based on sorting the scalar values associated with each vector. Through the utilization of space-filling curves it is possible to reduce the dimensionality of the space. A bi-dimensional space is considered here for demonstration purposes. A generic curve allows the association of a scalar value with a p-variate vector as follows:

\gamma(t_k) = (x_{1k}(t_k), x_{2k}(t_k))   (2.85)

with \gamma: Z \to K, K \subset Z^2. A filling curve makes it possible to cover, as the parameter t_k varies, all the points of the discrete space K, so that each point is crossed only once: for every x_k \in K there exists t_k such that \gamma(t_k) = x_k, and if t_k, t_l \in Z with t_k \ne t_l, then \gamma(t_k) \ne \gamma(t_l). In accordance with the above definitions, a filling curve essentially performs a scanning operation of the K space and generates a list of vectors in which there is no repetition of the same elements x_k. The filling curve itself is invertible; thus, if \gamma(t_k) = x_k then:

\gamma^{-1}: K \to Z : \gamma^{-1}(x_k) = t_k   (2.86)

An important observation which derives from (2.86) is that, by means of the parameter t_k, it is possible to perform a scalar indexing operation for each bi-dimensional vector and thus to reduce the bi-dimensional space and use the set of transformed values for scalar ordering. To design a space-filling curve suitable for color image processing, it is necessary to extend the notion of space filling to the three-channel RGB color space. The three-variate filling curve can be imagined as an expansion of successive increasing layers, ordered according to the maximum value of each three-dimensional color vector. A possible implementation strategy is to impose that the three-variate filling curve crosses all points with the same maximum value in a continuous way, e.g. by covering in an ordered way the three sides of a cube in the RGB color space [62], [63].

2.10 Filters Based on Marginal Ordering

The use of marginal ordering (M-ordering) is the most straightforward multivariate approach to color image filtering based on data ordering. The three


color image channels, in the RGB color space, are ordered independently. Several multichannel nonlinear filters based on marginal ordering can be proposed. The marginal median filter (MAMF) is the running marginal median operator y_{(\nu+1)} for n = 2\nu + 1. The marginal rank-order filter is the running order statistic y_{(i)} [38]. Based on similar concepts defined for univariate (one-dimensional) order statistics, a number of nonlinear filters, such as the median, the \alpha-trimmed mean and the L-filter, have been devised for color images by using marginal ordering. Theoretical analysis and experimental results have led to the conclusion that the marginal median filter is robust in the sense that it discards (filters out) impulsive noise while preserving important signal features, such as edges. However, its performance in the suppression of additive white Gaussian noise, which is frequently encountered in image processing, is inferior to that of the moving average or other linear filters. Therefore, a good compromise between the marginal median and the moving average (mean) filter is required. Such a filter is the \alpha-trimmed mean filter, which is a robust estimator for the normal (Gaussian) distribution. In gray-scale images the \alpha-trimmed mean filter is implemented as a local area operation where, after ordering the univariate pixel values in the local window, the top \alpha% and the bottom \alpha% are rejected and the mean of the remaining pixels is taken as the output of the filter, thus achieving a compromise between the median and mean filters. Now, using the marginal ordering scheme as defined previously, the trimmed mean filter for p-dimensional vector images has the following form [4], [65]:

\bar{y}_\alpha = \left[ \frac{1}{n(1 - 2\alpha_1)} \sum_{i=\alpha_1 n + 1}^{n - \alpha_1 n} y_{1(i)}, \; \ldots, \; \frac{1}{n(1 - 2\alpha_p)} \sum_{i=\alpha_p n + 1}^{n - \alpha_p n} y_{p(i)} \right]^T   (2.87)

The \alpha-trimmed mean filter, as defined in (2.87), rejects 2\alpha% of the outlying multivariate samples while still using a fraction (1 - 2\alpha) of the pixels to compute the average. The trimming operation gives the filter good performance in the presence of long-tailed or impulsive noise and helps to preserve sharp edges, while the averaging (mean) operation allows the filter to also perform well in the presence of short-tailed noise, such as Gaussian noise. Trimming can also be obtained by rejecting data that lie far away from the marginal median value. The remaining data can be averaged to form the modified trimmed mean filter as follows:

\bar{y}_\alpha = \frac{\sum_{W} r_i y_i}{\sum_{W} r_i}   (2.88)

with

r_i = \begin{cases} 1 & \text{if } (y_i - y_{(\nu+1)})^T \Gamma^{-1} (y_i - y_{(\nu+1)}) \le d \\ 0 & \text{otherwise} \end{cases}   (2.89)


where W is the filter window and \Gamma is a matrix related to the data dispersion. The \alpha-trimmed filter is a member of the family of marginal order statistic filters, also called L-filters [66], whose output is defined as a linear combination of the order statistics of the input signal sequence. The design of an optimal L-filter for estimating a constant signal corrupted by additive white noise was proposed in [66] and has been extended to the design of L-filters for multivariate signals based on marginal ordering (M-ordering). The following estimator will be called the p-variate marginal L-filter:

T(y_L) = \sum_{i_1=1}^{n} \cdots \sum_{i_p=1}^{n} A_{(i_1, i_2, \ldots, i_p)} \, y_{(i_1, i_2, \ldots, i_p)}   (2.90)

where y_{(i_1, i_2, \ldots, i_p)} = [x_{1(i_1)}, \ldots, x_{p(i_p)}]^T are the marginal order statistics and the A_{(i_1, i_2, \ldots, i_p)} are p \times p matrices. The performance of the marginal L-filter depends on the choice of the matrices A_{(i_1, i_2, \ldots, i_p)}. The L-filter of (2.90) coincides with the p-variate marginal median for the following choice of matrices:

A_{(i_1, i_2, \ldots, i_p)} = 0 \;\; \text{for } i_j \ne \nu + 1, \qquad A_{(\nu+1, \ldots, \nu+1)} = I_{p \times p}   (2.91)

Similarly, the marginal maximum y_{(n)}, the marginal minimum y_{(1)}, the moving average (mean), as well as the \alpha-trimmed mean filter, are special cases of (2.90). The robustness of the L-filters in the presence of multivariate outliers can be assessed by using the p-variate influence function [38], [37]. The influence function is a tool used in robust estimation for qualitatively characterizing the behavior of a filter in the presence of outliers. It relates to the asymptotic bias caused by contamination of the observations. As the name implies, the function measures the influence of an outlier on the filter's output. To evaluate the influence function in the p-variate case it is assumed that the vector filter is expressible as a functional T of the empirical distribution F_n of the data samples. When the sample size n is sufficiently large, T(F_n) converges in probability to an asymptotic functional T(F) of the underlying distribution F. Then the influence function IF(y; T, F), which measures the change of T caused by an additional observation at point y, is calculated as follows [26], [27]:

IF(y; T, F) = \lim_{t \to 0} \frac{T[(1-t)F + t\Delta_y] - T[F]}{t}   (2.92)

where \Delta_y = \Delta_{x_1} \Delta_{x_2} \cdots \Delta_{x_p} is a product of unit step functions at x_1, x_2, \ldots, x_p respectively. Each component of the influence function indicates the standardized change that occurs in the corresponding component of the filter when the assumed underlying distribution F is perturbed due to the presence of


outliers. If the change is bounded, the filter has good robustness properties and an outlier cannot destroy its performance. Therefore, the robustness of the filter can be measured in terms of its gross error sensitivity [38]:

\gamma^*(T, F) = \sup_{y} \| IF(y; T, F) \|_2   (2.93)

where \|\cdot\|_2 denotes the Euclidean norm. It can be proved, under certain conditions, that the L-filter is asymptotically normal and its covariance matrix is given by:

V(T, F) = \int IF(y; T, F) \, IF(y; T, F)^T \, dF(y)   (2.94)

In cases, such as the one considered here, where the actual signal x is approximately constant in the filter's window, the performance of the filter is measured by the dispersion matrix of the output:

D(T) = E[(T(y_L) - \mu_T)(T(y_L) - \mu_T)^T]   (2.95)

where \mu_T = E[T(y_L)]. The smaller the elements of the output dispersion matrix, the better the performance of the filter. The dispersion matrix is related asymptotically to the covariance matrix V(T, F) as follows [38]:

D(T) = \frac{1}{n} V(T, F)   (2.96)

The coefficients of the L-filter can be optimized for a specific noise distribution with respect to the mean squared error between the filter output and the desired, noise-free color signal, provided that the latter is available to the designer and constant within the filter window. The structural constraints of unbiasedness and location invariance can also be incorporated in the filter design. To this end, the mean square error (MSE) between the filter output \hat{y} = T(y_L) and the constant, noise-free, multivariate signal x is expressed in the following way:

\varepsilon = E[(\hat{y} - x)^T (\hat{y} - x)] = E\left[ \sum_{i=1}^{n} \sum_{j=1}^{n} y_{(i)}^T A_i^T A_j y_{(j)} \right] - 2 x^T \sum_{i=1}^{n} E[A_i y_{(i)}] + x^T x   (2.97)

After some manipulation, (2.97) becomes:

\varepsilon = \sum_{i=1}^{n} \sum_{j=1}^{n} \operatorname{tr}[A_i R_{ij} A_j^T] - 2 x^T \sum_{i=1}^{n} A_i \mu_i + x^T x   (2.98)

where R_{ij} is the (p \times p) correlation matrix of the ith and jth order statistics, R_{ij} = E[y_{(i)} y_{(j)}^T], i, j = 1, 2, \ldots, n, and \mu_i, i = 1, 2, \ldots, n, denotes the (p \times 1) mean vector of the ith order statistic, \mu_i = E[y_{(i)}].


Let a_i denote the (np \times 1) vector made up of the ith rows of the matrices A_1, \ldots, A_n. Also define the (np \times 1) vector \mu_p = [\mu_1^T, \mu_2^T, \ldots, \mu_p^T]^T, where \mu_j denotes the mean vector of the order statistics in channel j, as well as the (np \times np) matrix R_p:

R_p = \begin{bmatrix} R_{11} & R_{12} & \cdots & R_{1p} \\ R_{12}^T & R_{22} & \cdots & R_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ R_{1p}^T & R_{2p}^T & \cdots & R_{pp} \end{bmatrix}   (2.99)

Using the previous notation, after some manipulation the MSE is given by:

\varepsilon = \sum_{i=1}^{p} a_{(i)}^T R_p a_{(i)} - 2 x^T \left[ a_{(1)}^T \mu_p, \; a_{(2)}^T \mu_p, \; \ldots, \; a_{(p)}^T \mu_p \right]^T + x^T x   (2.100)

The minimization of the MSE in (2.100) results in the following p sets of equations:

R_p a_{(m)} = x_m \mu_p   (2.101)

with m = 1, 2, \ldots, p, which yields the optimal p-variate L-filter coefficients:

a_{(1)} = x_1 R_p^{-1} \mu_p, \qquad a_{(m)} = \frac{x_m}{x_1} a_{(1)}   (2.102)

where m = 2, \ldots, p. This completes the derivation of the multivariate L-filters based on the marginal sub-ordering principle and the MSE fidelity criterion. In addition, the constrained minimization subject to the constraints of unbiased and location-invariant estimation can be found in [66]. Simulation results reported in [38], [66] suggest that multivariate filters based on marginal data ordering are superior to the simple moving average, the marginal median and single-channel L-filters when applied to color images.
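A window-level sketch of two marginal-ordering filters from this section: the marginal median (MAMF) and the \alpha-trimmed mean of (2.87), applied to the n vectors of a single processing window. Trimming the same fraction \alpha in every channel, and the sample window values, are simplifying assumptions of this sketch.

```python
import numpy as np

def marginal_median(window):
    """Marginal median (MAMF): the componentwise median of the window vectors."""
    return np.median(window, axis=0)

def alpha_trimmed_mean(window, alpha=0.2):
    """Marginal alpha-trimmed mean, a sketch of eq. (2.87): in every channel the
    ordered values are trimmed by alpha*n at each end and the rest are averaged."""
    window = np.sort(np.asarray(window, dtype=np.float64), axis=0)   # M-ordering per channel
    n = window.shape[0]
    t = int(alpha * n)                                               # samples trimmed per end
    return window[t:n - t].mean(axis=0)

# window: n x 3 array of RGB vectors at the current filter position (values are illustrative)
window = np.array([[10, 12, 9], [11, 13, 10], [255, 0, 255], [12, 11, 10], [9, 10, 12]])
print(marginal_median(window), alpha_trimmed_mean(window, alpha=0.2))
```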

2.11 Filters Based on Reduced Ordering

Reduced ordering (R-ordering) is another sub-ordering principle which has been used extensively in the development of multivariate color image filters. R-ordering orders p-variate, vector-valued signals according to their distance from some reference vector. As a consequence, multivariate ordering is reduced to scalar ordering. Reduced ordering is rather easy to implement, it can provide cues about outliers, and it is the sub-ordering principle that is the


most natural for vector-valued observations, such as color image signals. It is obvious that the choice of an appropriate reference vector is crucial for the reduced ordering scheme. Depending on the reference vector, different ranking schemes are obtained: in the median R-ordering, the center R-ordering and the mean R-ordering, the marginal median, the window-center value and the window average are used as the reference vector respectively. The choice of the appropriate reference vector depends on the design characteristics and is application dependent. Assuming that a suitable reference vector and an appropriate reduction function are available, the set of vectors W(n) can be ordered. It can be expected that any outliers will be located at the upper extreme ranks of the sorted sequence. Therefore, the order statistics y_{(j)}, j = 1, 2, \ldots, m, with m \le n, can be selected such that it can safely be assumed that these color vectors are not outliers. For analysis purposes, the Euclidean distance will be used as the reduction function and mean R-ordering, that is, ordering around the mean value \bar{y} of the samples in the processing window, is utilized. Then, let d_{(j)} define the radius of a hyper-sphere centered at the sample mean value. The hyper-sphere defines a region of confidence. If the sample y_k lies within the hyper-sphere, it can be assumed that this color vector is not an outlier and thus it should not be altered by the filter operation. Otherwise, if y_k is beyond this volume, that is, if L_2(y_k, \bar{y}) = \| y_k - \bar{y} \|_2 = \left( (y_k - \bar{y})^T (y_k - \bar{y}) \right)^{1/2} is greater than d_{(j)}, then the window center value is replaced with the nearest vector signal contained in the set W^*(n) = [y_{(1)}, y_{(2)}, \ldots, y_{(j)}]. Therefore, the resulting reduced ordering RE filter can be defined as follows [25]: the output is \hat{y}_k = y_k if L_2(y_k, \bar{y}) \le d_{(j)}, and otherwise \hat{y}_k is the vector in W^*(n) = [y_{(1)}, y_{(2)}, \ldots, y_{(j)}] closest to y_k.

3.2 The Adaptive Fuzzy System

In the first simulation experiment (Simulation I), the noise-free two-channel signal is

x = \begin{cases} (1.5, \; 1.75)^T & t \le 45 \\ (3.5, \; 3.75)^T & t > 45 \end{cases}   (3.58)

and the additive noise is

w(t) = u(t) v_1(t) + (I - u(t)) v_2(t)   (3.59)

where u(t) = u(t) I_{2 \times 1}. Here u(t) is a random number uniformly distributed over the interval [0, 1], v_1(t) is drawn from a Gaussian distribution with zero mean and covariance 0.05 I_{2 \times 2}, and v_2(t) is drawn from a Gaussian distribution with zero mean and covariance 0.25 I_{2 \times 2}.

3.2 The Adaptive Fuzzy System

129

An operational window of size N = 5 was used in all the experiments reported here. The filtering results are shown in Figs. 3.1 and 3.2, which depict the filter outputs for the first and the second signal component, respectively.

Fig. 3.1. Simulation I: Filter outputs, 1st component (filter comparison versus Steps; solid: FVDF, dashed: VMF, dash-dot: AMF).

Fig. 3.2. Simulation I: Filter outputs, 2nd component (filter comparison versus Steps; solid: FVDF, dashed: VMF, dash-dot: AMF).

To evaluate the performance of the algorithms in the presence of mixed Gaussian and impulsive noise, a second simulation experiment was conducted. In this experiment the actual signal is corrupted with mixed Gaussian and impulsive noise, and the observed signal has the following form:

y(t) = x + w(t)    (3.60)

with

x = (1.5, 1.75)^T   for t ≤ 15, 35 < t ≤ 55 and t > 75
x = (3.5, 3.75)^T   for 15 < t ≤ 35 and 55 < t ≤ 75    (3.61)

and

w(t) = v_1(t) + v_2(t)    (3.62)

where v_1(t) is drawn from a Gaussian distribution with zero mean and covariance 0.25 I_{2×2} and v_2(t) is impulsive noise with an equal number of positive and negative spikes of height 0.25.
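For readers who wish to reproduce this experiment, the following Python sketch generates a two-channel test signal of this form. The step locations, amplitudes and noise levels follow (3.60)-(3.62); the impulse probability and the random seed are illustrative assumptions, since the text does not specify them.

```python
import numpy as np

def simulation_ii_signal(n_steps=90, impulse_prob=0.1, seed=0):
    """Generate Simulation II test data: a two-channel step signal
    corrupted by Gaussian noise plus symmetric impulsive spikes."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, n_steps + 1)

    # Piecewise-constant actual signal x(t), as in (3.61).
    low, high = np.array([1.5, 1.75]), np.array([3.5, 3.75])
    in_high = ((t > 15) & (t <= 35)) | ((t > 55) & (t <= 75))
    x = np.where(in_high[:, None], high, low)

    # Gaussian component v1(t) with covariance 0.25*I, as in (3.62).
    v1 = rng.normal(0.0, np.sqrt(0.25), size=(n_steps, 2))

    # Impulsive component v2(t): positive/negative spikes of height 0.25
    # (equal-probability signs approximate the "equal number" in the text).
    spikes = rng.random((n_steps, 2)) < impulse_prob
    signs = rng.choice([-1.0, 1.0], size=(n_steps, 2))
    v2 = 0.25 * spikes * signs

    y = x + v1 + v2          # observed signal, as in (3.60)
    return t, x, y
```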

Fig. 3.3. Simulation II: Actual signal and noisy input, 1st component (two panels versus Steps: actual signal; noisy input).

Figure 3.3 shows (i) the actual signal and (ii) the noisy input for the first component. Figure 3.5 shows (i) the output of the fuzzy adaptive filter, (ii) the output of the median filter and (iii) the output of the mean filter for the first vector component. Figures 3.4 and 3.6 depict the corresponding signals for the second vector component in the same order. From the above simulation experiments the following conclusions can be drawn:



Fig. 3.4. Simulation II: Actual signal and noisy input, 2nd component (two panels versus Steps: actual signal; noisy input).

1. The vector median filter (VMF) works better near sharp edges.
2. The arithmetic mean (linear) filter works better for homogeneous signals with additive Gaussian-like noise.
3. The proposed adaptive filter can suppress the noise in homogeneous regions much better than the median filter and can preserve edges better than the simple averaging (arithmetic mean) filter.

3.3 The Bayesian Parametric Approach

In addition to fuzzy designs, statistical concepts can be used to devise adaptive color filters. In this section, adaptive filters based on generalized noise models and the principle of minimum variance estimation are discussed. In all the adaptive schemes defined in this section, a `loss function' which depends on the noiseless color vector and its filtered estimate is used to penalize errors during the filtering procedure [45]. It is natural to assume that, if estimation errors are penalized through a loss function, then the optimum filter is the function of the measurements which minimizes the expected or average loss. In an additive noise scenario, the optimal estimator, which minimizes the average or expected quadratic loss, is defined as [46]:


Fig. 3.5. Simulation II: Filter outputs, 1st component (three panels versus Steps: FVDF, VMF and AMF outputs).

Fig. 3.6. Simulation II: Filter outputs, 2nd component (three panels versus Steps: FVDF, VMF and AMF outputs).

E(x|y) = ŷ_mv = ∫_{−∞}^{∞} x f(x|y) dx    (3.63)

or

ŷ_mv = ∫_{−∞}^{∞} x ( f(y, x) / f(y) ) dx = ( ∫_{−∞}^{∞} x f(y, x) dx ) / f(y)    (3.64)

with

f(y) = ∫_{−∞}^{∞} f(y, x) dx    (3.65)

As in the case of the order statistics based filters, a sliding window W(n) of size n is assumed. Under the assumption that the actual image vectors remain constant within the filter window, the determination of the minimum variance estimate at the window center corresponds to the problem of estimating a constant signal from the n noisy observations present in the filter window [44]:

ŷ_mv = E(x|Y) = ∫_{−∞}^{∞} x f(x|Y) dx    (3.66)

Central to the solution discussed above is the determination of the probability density function of the image vectors conditioned on the available noisy image data. If this a-posteriori density function is known, then the optimal estimate, for the selected performance criterion, can be determined. Unfortunately, in a realistic application scenario such a-priori knowledge about the process is usually not available. In our adaptive formulation, the requested probability density function is assumed to be of a known functional form but with a set of unknown parameters. This `parent' distribution provides a partial description; full knowledge of the underlying phenomenon is achieved through the specific values of the parameters. Given the additive nature of the noise, knowledge of the actual noise distribution is sufficient for the parametric description of the image vectors conditioned on the observations. In image processing a certain family of noise models is often encountered. Thus, a symmetric `parent' distribution can be introduced which includes the most commonly encountered noise distributions as special cases [47]. This distribution is characterized by a location parameter, a scale parameter and a third parameter ε which measures the degree of non-normality of the distribution [49]. The multivariate generalized Gaussian function, which can be viewed as an extension of the scalar distribution introduced in [48], is defined as:

f(m | μ, σ, ε) = k^M exp( −0.5 ω ( |m − μ| / σ )^{2/(1+ε)} )    (3.67)

where M is the dimension of the measurement space, σ, the variance, is an M×M matrix which can be considered diagonal with elements σ_c, c = 1, 2, …, M, while the remaining parameters are defined as

ω = ( Γ(1.5(1+ε)) / Γ(0.5(1+ε)) )^{1/(1+ε)} ,    k = Γ(1.5(1+ε))^{0.5} / ( (1+ε) Γ(0.5(1+ε))^{1.5} )

with Γ(x) = ∫_0^∞ t^{x−1} e^{−t} dt, x > 0. This is a two-sided symmetric density which offers great flexibility. By altering the `shape' parameter ε, different members of the family are obtained. For example, a value of ε = 0 results in the Gaussian distribution, ε = 1 yields the double exponential, and as ε → −1 the distribution tends to the rectangular (uniform). For −1 < ε < 1 intermediate symmetric distributions are obtained [47].

Based on this generalized `parent' distribution, an adaptive estimator can be devised utilizing Bayesian inference techniques. Assume, for example, that the image degradation process follows the additive noise model introduced in Chap. 2 and that the noise density function belongs to the generalized family of (3.67). Assuming that the shape parameter ε and the location and scale parameters of the function are independent, f(x, σ, ε) ∝ f(x, σ) f(ε), the adaptively filtered result for a `quadratic loss function' is given as:

E(x|Y) = ( ∫∫∫ x f(Y|x, σ, ε) f(x, σ) f(ε) dx dσ dε ) / ( ∫∫∫ f(Y|x, σ, ε) f(x, σ) f(ε) dx dσ dε )    (3.68)

E(x|Y) = ∫ ( ( ∫∫ x f(Y|x, σ, ε) f(x, σ) dx dσ ) / ( ∫∫ f(Y|x, σ, ε) f(x, σ) dx dσ ) ) ( f(ε) f(Y|ε) / f(Y) ) dε    (3.69)

E(x|Y) = ∫ E(x|Y, ε) f(ε|Y) dε    (3.70)

with

E(x|Y, ε) = ∫ x f(x|Y, ε) dx    (3.71)
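The role of the shape parameter can be illustrated numerically. The Python snippet below evaluates a scalar member of the generalized Gaussian family for a chosen ε. The normalization follows the standard Box-Tiao parameterization of the exponential-power family; this is an assumption made for the example and may differ from the book's constants by a fixed factor, since only the scalar case is shown here.

```python
import numpy as np
from math import gamma

def generalized_gaussian_pdf(m, mu=0.0, sigma=1.0, eps=0.0):
    """Scalar member of the generalized Gaussian ('parent') family.

    eps = 0 gives the Gaussian, eps = 1 the double exponential,
    eps -> -1 approaches the uniform (rectangular) distribution.
    """
    a = 0.5 * (1.0 + eps)
    omega = (gamma(3 * a) / gamma(a)) ** (1.0 / (1.0 + eps))
    k = gamma(3 * a) ** 0.5 / ((1.0 + eps) * gamma(a) ** 1.5)
    z = np.abs((np.asarray(m) - mu) / sigma)
    return (k / sigma) * np.exp(-omega * z ** (2.0 / (1.0 + eps)))

# The eps = 0 case coincides with the standard normal density.
x = np.linspace(-3, 3, 7)
print(generalized_gaussian_pdf(x, eps=0.0))
print(np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi))
```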

The computational complexity of the adaptive filter depends on the information available about the shape parameter ε. In applications such as image processing, where ε is naturally discrete, the exact realization of the adaptive estimator can be obtained in a computationally efficient way. If the number of shape values is finite (ε_1, …, ε_q), then it is possible to obtain the overall adaptive filtered output by combining the conditional filtering results with the Bayesian learning of the unknown shape parameter. The form of the adaptive filter therefore becomes that of a weighted sum:

E(x|Y) = Σ_{i=1}^{q} E(x|Y, ε_i) f(ε_i|Y)    (3.72)

In cases where a continuous parameter space is assumed for the shape parameter, the a-priori density function can be quantized using the form f(ε) = Σ_{i=1}^{q} f(ε_i) δ(ε − ε_i) to obtain discrete values. Using the quantized values of the shape parameter, the approximate adaptive algorithm takes the form of (3.72).



Assume that for a given image location, a window W consisting of n noisy image vectors is available. Assume further that, based on these Y = W(n) measurements, intermediate estimates conditioned on the various ε_i are available. For example, conditioned on ε = 0, the mean value of the Y measurements can be considered the best estimate of the location. Alternatively, if ε = 1, the median value of the Y set is essentially accepted as the best estimator. In such a scenario, the main objective of the adaptive procedure is the calculation of the posterior densities which arise for the different shape parameters. Assuming a uniform reference prior over the range −1 < ε ≤ 1 for f(ε), the conditional densities are calculated through the following rule:

f(ε_i|Y) = f(y_n|ε_i, Y_{n−1}) f(ε_i|Y_{n−1}) / Σ_{j=1}^{q} f(y_n|ε_j, Y_{n−1}) f(ε_j|Y_{n−1})    (3.73)

with

f(y_n|ε_i, Y_{n−1}) = k^M exp( −0.5 ω ( |y_n − x̂_{ε_i}| / σ )^{2/(1+ε_i)} )    (3.74)

where Y = (y_1, y_2, …, y_{n−1}, y_n) and Y_{n−1} = (y_1, y_2, …, y_{n−1}) are the observations obtained from the window, and x̂_{ε_i} is the conditional filtered result for the image vector at the window center using the specific value ε = ε_i of the shape parameter. The above result was obtained using Bayes' rule:

f(ε_i|Y) = f(ε_i, Y) / f(Y) = f(ε_i, y_n, Y_{n−1}) / ( f(y_n|Y_{n−1}) f(Y_{n−1}) ) = f(ε_i, y_n|Y_{n−1}) / Σ_{j=1}^{q} f(ε_j, y_n|Y_{n−1})    (3.75)

Further application of Bayes' rule results in:

f(ε_i, y_n|Y_{n−1}) = f(ε_i, y_n, Y_{n−1}) / f(Y_{n−1}) = f(y_n|ε_i, Y_{n−1}) f(ε_i, Y_{n−1}) / f(Y_{n−1})    (3.76)

or

f(ε_i, y_n|Y_{n−1}) = f(y_n|ε_i, Y_{n−1}) ( f(ε_i, Y_{n−1}) / f(Y_{n−1}) ) = f(y_n|ε_i, Y_{n−1}) f(ε_i|Y_{n−1})    (3.77)

To complete the adaptive determination of the a-posteriori density f(ε_i|Y) in (3.76), the predictive density f(y_n|ε_i, Y_{n−1}) must be computed. Due to the additive nature of the noise:

f(y_n|ε_i, Y_{n−1}) = f(y_n|ε_i, x) = f_{n|x}(y_n − x|ε_i) = f_n(y_n − x|ε_i)    (3.78)



where f_{n|x}(·) denotes the conditional pdf of the noise n given x, and f_{n|x}(·) = f_n(·) when n and x are independent. Thus, the density f(y_n|ε_i, Y_{n−1}) can be considered generalized Gaussian with shape parameter ε_i and with the conditional filter output as its location estimate.

The Bayesian inference procedure described above allows for the selection of the appropriate density from the family of densities considered. If the densities corresponding to the different assumed shape values are representative of the class of densities encountered in image processing applications, then the Bayesian procedure should provide good results regardless of the underlying density, resulting in a robust adaptive estimation procedure. The adaptive filter described in this section can be viewed as a linear combination of specified, elemental filtered values. The weights in the adaptive design are nonlinear functions of the difference between the measurement vector and the elemental filtered values determined by conditioning on the various ε_i. In this context, the Bayesian adaptive filter can be viewed as a generalization of radial basis function neural networks [50] or fuzzy basis function networks [51]. If desired, the minimum mean square error estimate of the unknown scalar shape parameter can be determined as:

E(ε|Y) = ε̂_mmse(Y) = Σ_{i=1}^{q} ε_i f(ε_i|Y)    (3.79)

with the error in the shape parameter estimation calculated through:

E( (ε − ε̂_mmse(Y))^2 | Y ) = Σ_{i=1}^{q} ( ε_i − ε̂_mmse(Y) )^2 f(ε_i|Y)    (3.80)

In a similar manner, the maximum a-posteriori estimate of the shape parameter, ε̂_map(Y), can be obtained through the adaptive filter. The following comments can be made regarding the adaptive filter:

1. The adaptive filter of (3.72) is optimal in the Bayes sense every time it is used inside the window, and its optimality does not depend on convergence. The weights that regulate the contribution of the elemental filters are not derived heuristically. Rather, the weights are determined through Bayes' theorem using the assumptions on the noise density functions. The adaptive filter weights depend on the local image information and thus, as the filter window moves from one pixel to the next, a different adaptive filter output is obtained.
2. Through the adaptive design, the problem of determining the appropriate noise distribution is transformed into the problem of combining a collection of admissible distributions. This constitutes a problem of considerably reduced complexity, since specific noise models, such as additive Gaussian noise, impulsive noise or a combination of both, are often encountered in image processing applications.



3. This adaptive design is also scalable. The designer controls the complexity of the procedure by determining the number and form of the individual filters. Depending on the problem specification and the computational constraints imposed by the design, an appropriate number of elemental filters can be selected. The filter requires no prior training signals or test statistics, and its parallel structure makes it suitable for real-time image applications.

The adaptive procedure is simple, computationally efficient, easy to use and reasonably robust. In the approach presented, the posterior probabilities are more important than the manner in which the designer obtains the elemental estimates used in the procedure. Different methodologies can be utilized to obtain these estimates. Filters derived using the maximum likelihood principle (e.g. VMF, BVDF), robust estimators (e.g. the α-trimmed mean filter) and estimators based on adaptive designs, such as the different fuzzy filters discussed in this chapter, can all be used to provide the needed elemental estimates. Of the large number of filters which can be designed using the adaptive procedure, one of great practical importance combines a vector median filter (VMF) with an arithmetic (linear) mean filter (AMF). Extensive experimentation has shown that in the homogeneous regions of an image a mean filter is probably the most suitable estimator, whereas in areas where edges or fine details are present a median filter is preferable. Through the adaptive design in (3.72), these two filters can be combined. By using local image information in the current processing window, such an adaptive filter, called BFMA, can switch between the two elemental filters in a data-dependent manner, offering enhanced filtering performance.
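A minimal sketch of such a two-filter combination is given below in Python. It combines an arithmetic mean (the ε = 0 elemental filter) with a vector median (standing in for the ε = 1 elemental filter) using weights obtained from generalized-Gaussian style likelihoods in the spirit of (3.72)-(3.74). The scalar scale sigma, the equal priors and the simplified likelihood form are illustrative assumptions rather than the exact constants used in the book.

```python
import numpy as np

def vector_median(window):
    """Vector median: the sample minimizing the sum of L2 distances to all others."""
    d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=2).sum(axis=1)
    return window[np.argmin(d)]

def bfma_like_filter(window, sigma=10.0):
    """Bayesian-style combination of mean (eps=0) and vector median (eps=1) outputs.

    window: (n, 3) array of color vectors from the sliding window; the noisy
    center pixel is assumed to be window[n // 2].
    """
    y_center = window[len(window) // 2]
    estimates = {0.0: window.mean(axis=0), 1.0: vector_median(window)}

    weights = {}
    for eps, x_hat in estimates.items():
        r = np.linalg.norm(y_center - x_hat) / sigma
        # Generalized-Gaussian style likelihood: exponent 2/(1+eps).
        weights[eps] = np.exp(-0.5 * r ** (2.0 / (1.0 + eps)))

    total = sum(weights.values())
    if total == 0.0:                      # numerical guard
        return estimates[1.0]
    return sum(w / total * estimates[e] for e, w in weights.items())

# Example: a 3x3 window (9 color vectors) whose center is an impulsive outlier.
win = np.full((9, 3), 120.0)
win[4] = [255.0, 0.0, 255.0]
print(bfma_like_filter(win))            # output stays close to the median (120, 120, 120)
```

In the homogeneous case (no outlier), the likelihood of the mean output dominates and the filter behaves like the AMF; near an impulse, as in the example, the weight shifts to the vector median.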

3.4 The Non-parametric Approach

The adaptive formulation presented in the previous section was based on the assumption that a certain class of densities can be used to describe the noise corrupting color images. Thus, a Bayesian adaptive procedure was utilized to determine on-line the unknown parameters of the noise density function. In a more general formulation, however, the functional form of the noise density may also be unknown. In such a case, the densities involved in the derivation of the optimal estimator of (3.64) cannot be determined through a parametric technique such as the one described in the previous section. Rather, they have to be estimated from the available sample observations using a non-parametric technique. Among the plethora of different non-parametric schemes, the kernel estimator is adopted here [52]. The notion of non-parametric estimation remains relatively unfamiliar, so a brief overview is given first.



If the objective is the non-parametric determination of an unknown multivariate density f(z) from a set of independent samples Z = {z_1, z_2, …, z_n} drawn from the unknown underlying density, the form of a data-adaptive non-parametric kernel estimator is:

f̂(z) = n^{−1} Σ_{l=1}^{n} h_l^{−p} K( (z − z_l) / h_l )    (3.81)

where z_l ∈ R^p, p is the dimensionality of the measurement space (p = 3 for color images), K: R^p → R^1 is a function centered at 0 that integrates to 1, and h_l is the smoothing term [53]-[56]. The form of the data-dependent smoothing parameter is of great importance for the non-parametric estimator. To this end, a new smoothing factor suitable for multichannel estimation is presented here. For the sample-point estimator defined in (3.81), the smoothing factor is defined as a function of the aggregate distance between the local observation under consideration and all the other vectors inside the Z set, excluding the point at which the density is evaluated. The smoothing parameter is therefore given by:

h_l = n^{−k/p} A_l = n^{−k/p} Σ_{j=1}^{n} |z_j − z_l|    (3.82)

where z_j ≠ z_l for all z_j, j = 1, 2, …, n, |z_j − z_l| is the absolute distance (L1 metric) between the two vectors, and k is a parameter to be determined. The resulting variable kernel estimator exhibits local smoothing which depends both on the point at which the density estimate is taken and on information local to each sample observation in the Z set.

In addition to the smoothing parameter discussed above, the form of the selected kernel also affects the result. Usually, positive kernels are selected for the density approximation. The most common choices are kernels derived from symmetric distribution functions, such as the Gaussian or the double exponential. For the simulation studies reported in this section, the multivariate exponential kernel K(z) = exp(−|z|) and the multivariate Gaussian kernel K(z) = exp(−0.5 z^T z) were selected [55].

As for any estimator, the behavior of the non-parametric estimator of (3.81) is determined through the study of its statistical properties. Certain restrictions must be placed on the design parameters, such as the smoothing factor, in order to obtain an asymptotically unbiased and consistent estimator. According to the analysis introduced in [55], if the conditions lim_{n→∞} n h_l^{2p}(n) = ∞ (asymptotic consistency), lim_{n→∞} n h_l^{p}(n) = ∞ (uniform consistency) and lim_{n→∞} h_l^{p}(n) = 0 (asymptotic unbiasedness) are satisfied, then f̂(z) becomes an asymptotically unbiased and consistent estimate of f(z). The multiplier n^{−k/p} in (3.82), with 0 < k < 0.5, guarantees the satisfaction of these conditions [55]. The selection of A_l does not affect the asymptotic properties of the estimator in (3.81). However, for a finite number of samples, as in our case, the function A_l is the dominant parameter determining the performance of the non-parametric estimator.

After this brief introduction, the non-parametric evaluation of the densities involved in the derivation of the optimal estimator of (3.64) can be considered. This time, no assumption regarding the functional form of the noise present in the image is made. It is only assumed that n pairs of image vectors (x_l, y_l), l = 1, 2, …, n are available through a sliding window of length n centered around the noisy observation y. Based on this sample, the densities f(y) and f(y, x) will be approximated using sample-point adaptive non-parametric kernel estimators.

The first task is to approximate the joint density f(y, x). As a non-parametric density approximation the following may be chosen:

f̂(x, y) = n^{−1} Σ_{l=1}^{n} (h_{lx})^{−p} (h_{ly})^{−p} K( (x − x_l)/h_{lx} , (y − y_l)/h_{ly} )    (3.83)

Assuming a product kernel estimator [57], the non-parametric approximation of the joint density f(y, x) is as follows:

f̂(x, y) = n^{−1} Σ_{l=1}^{n} (h_{lx})^{−p} (h_{ly})^{−p} K( (x − x_l)/h_{lx} ) K( (y − y_l)/h_{ly} )    (3.84)

The marginal density f(y) in the denominator of (3.64) can then be approximated using the result in (3.83) as follows:

∫ f̂(y, x) dx = n^{−1} Σ_{l=1}^{n} (h_{ly})^{−p} K( (y − y_l)/h_{ly} ) ( ∫ (h_{lx})^{−p} K( (x − x_l)/h_{lx} ) dx )    (3.85)

∫ f̂(y, x) dx = n^{−1} Σ_{l=1}^{n} (h_{ly})^{−p} K( (y − y_l)/h_{ly} )    (3.86)

since ∫ K(z) dz = 1, assuming that the kernel results from a real density. The determination of the numerator is now also feasible. The assumption that ∫ z K(z) dz = 0 implies that [57]:

∫ x K(x − x_l) dx = x_l    (3.87)

Thus, the numerator of (3.64) becomes:



∫ x f̂(y, x) dx = n^{−1} Σ_{l=1}^{n} x_l (h_{ly})^{−p} K( (y − y_l)/h_{ly} )    (3.88)

Utilizing (3.82)-(3.88), the sample-point adaptive non-parametric estimator of ŷ can be defined as:

ŷ_np = ∫ x ( f̂(x, y) / f̂(y) ) dx = ( Σ_{l=1}^{n} x_l n^{−1} h_l^{−p} K( (y − y_l)/h_l ) ) / ( Σ_{l=1}^{n} n^{−1} h_l^{−p} K( (y − y_l)/h_l ) )
     = Σ_{l=1}^{n} x_l ( h_l^{−p} K( (y − y_l)/h_l ) / Σ_{j=1}^{n} h_j^{−p} K( (y − y_j)/h_j ) ) = Σ_{l=1}^{n} x_l w_l(y)    (3.89)

where y_l ∈ W and w_l(y) is a weighting function defined in the interval [0, 1].

From (3.89) it can be seen that the non-parametric estimator, often called the Nadaraya-Watson estimator [58], [59], is given as a weighted average of the samples in the selected window. The inputs in the mixture are the noise-free vectors x_l. The estimator is linear with respect to the x_l and can therefore be considered a linear smoother. The basis functions, on the other hand, which are determined by the kernel function K(·) and the smoothing parameter h(·), can be either linear or nonlinear in the noisy measurements y_l. It is easy to recognize the similarity between the Bayesian adaptive parametric filter discussed in this chapter and the Nadaraya-Watson estimator. The Bayesian adaptive filter is also a linear smoother with respect to a function of the x_l (the elemental filtered results) and utilizes nonlinear basis functions which are determined by the unknown `shape' parameter ε of the assumed generalized `parent' distribution.

Although the existence of an estimate consistent in the mean square sense has been proven, there are no a-priori guidelines for the selection of design parameters, such as the smoothing vector or the kernel, on the basis of a finite set of data. Smoothing factors other than the aggregate distance introduced here, such as the minimum or maximum distance between y_l and its k-th nearest neighbors, constitute valid solutions and can be used instead [60]. The adjustable parameters of the proposed filter are x, y, K(·) and h(·). The degree of smoothness is controlled by the smoothing factor h(·). It can easily be seen that by appropriately modifying the smoothing factor the non-parametric estimator can be forced to match a given sample arbitrarily closely. To accomplish this, the kernel is modified by adjusting the effect of the smoothing parameter h(·) through an exponent. In this case, the form of the estimator is as follows:

ŷ = Σ_{l=1}^{n} x_l ( h_l^{−p} K( (y − y_l)/h_l^r ) / Σ_{j=1}^{n} h_j^{−p} K( (y − y_j)/h_j^r ) )    (3.90)



where the parameter r regulates the smoothness of the kernel. Since the non-parametric filter is a regression estimator which provides a smooth interpolation among the observed vectors inside the processing window, the r parameter can provide the required balance between smoothing and detail preservation. Because r is a one-dimensional parameter, it is usually not difficult to determine an appropriate value for a practical application. By increasing the value of r the non-parametric estimator can be forced to approximate arbitrarily closely any one of the vectors inside the filtering window. To this end, suppose that a non-parametric estimator with a given value of r exists for the available input set Y. Then the following relation holds:

ŷ = ( x_j + Σ_{l=1, l≠j}^{n} x_l h_l^{−p} K( (y − y_l)/h_l^r ) ) / ( 1 + Σ_{l=1, l≠j}^{n} h_l^{−p} K( (y − y_l)/h_l^r ) )    (3.91)

with l = 1, 2, …, n, assuming that x_j ≠ x_l for j ≠ l. Then, for arbitrary δ > 0 and any l, j with l = 1, 2, …, n and j ≠ l, the kernel value K( (y − y_l)/h_l^r ) can be forced below δ, since by properly choosing r = 1/r_v the kernel K( (y − y_l)/h_l^r ) → 0 as r_v → 0 whenever y ≠ y_l. Thus, it can be concluded that there exists some value of r for which the non-parametric regressor approaches an existing vector arbitrarily closely.

To obtain the final estimate it is assumed that, in the absence of noise, the actual image vectors x_l are available. As is the case for adaptive/trainable filters, a training record can sometimes be obtained during a calibration procedure in a controlled environment. In a real-time image processing application, however, that is not always possible. Therefore, alternative suboptimal solutions are introduced. In a first approach, each vector x_l in (3.89) is replaced with its noisy measurement y_l. The resulting suboptimal estimator, called the adaptive multichannel non-parametric filter (hereafter AMNF), is based solely on the available noisy vectors and the form of the data-adaptive kernel selected for the density approximations. Thus, the AMNF has the following form:

ŷ_AMNF = Σ_{l=1}^{n} y_l ( h_l^{−p} K( (y − y_l)/h_l ) / Σ_{j=1}^{n} h_j^{−p} K( (y − y_j)/h_j ) )    (3.92)

A different form of the adaptive non-parametric estimator can be obtained if a reference vector is used instead of the actual color vector x_l in (3.92). A robust estimate of the location, usually evaluated in a smaller subset of the input set, is utilized instead of the x_l. Usually the median is the preferable choice, since it smooths out impulsive noise while preserving edges and details. However, unlike scalars, the most central vector in a set of vectors can be defined in more than one way. Thus, the vector median filter (VMF) or the marginal median filter (MAMF) operating in a (3×3) window centered around the current pixel can be used to provide the required reliable reference. Here, the VMF evaluated in a (3×3) window was selected to provide the reference vector. The new adaptive multichannel non-parametric filter (hereafter AMNF2) has the following form:

ŷ_AMNF2 = Σ_{l=1}^{n} x_l^{VM} ( h_l^{−p} K( (y − y_l)/h_l ) / Σ_{j=1}^{n} h_j^{−p} K( (y − y_j)/h_j ) )    (3.93)

The AMNF2 can be viewed as a double-window, two-stage estimator. First the original image is filtered by a multichannel median filter in a small processing window in order to reject possible outliers, and then the adaptive filter of (3.93) is utilized to provide the final filtered output. The AMNF2 filter can be viewed as an extension to the multichannel case of the double-window (DW) filtering structures extensively used in gray-scale image processing. As in gray-scale processing, with this adaptive filter the user can distinguish between two operators: (i) the computation of the median in a smaller window; and (ii) the adaptive averaging in a second processing window.

A kernel estimator designed specifically for directional data, such as color vectors, can be devised based on the properties of the color samples on the sphere [63]. When dealing with directional data, a kernel other than the exponential (Gaussian-like) kernel often used in non-parametric density approximation should be utilized. In [62] the following kernel is recommended for the evaluation of the density at a point Y given a set of n available data points y_1, y_2, …, y_n:

K_n(Y) = ( C_n / (4π sinh(C_n)) ) exp( C_n (Y · y_i) )    (3.94)

where A(C_n) = C_n / (4π sinh(C_n)) normalizes the kernel to a probability density, C_n is the reciprocal of the h_l used in the definition of the data-adaptive non-parametric estimator used in [11], and Y and y_i are the Cartesian representations of the color vectors, with the term (Y · y_i) denoting the inner product between the two vectors. Alternatively, the term 1 − (Y · y_i), which is the cosine of the distance between the two vectors (Y, y_i) along the surface of the sphere, can be used. Comparing the adaptive kernel estimator of (3.81) with the kernel estimator of (3.94) proposed in [62], it can be seen that the former is based on the Euclidean distance among the available data points, while the latter is based on the inner product (the cosine of the distance) between the available data points.

In [64] a simple and computationally attractive alternative to the non-parametric directional estimator was proposed. The proposed non-parametric directional density estimator is of the following form:

K_n(Y) = n^{−1} Σ_{l=1}^{n} (1/A_m) cos^{2m}( a_l / 2 )    (3.95)

where A_m is a normalization constant and m is a user-defined smoothing factor, approaching infinity, to be determined as a function of the data record n; a_l denotes the angle between the point Y and the vector with spherical coordinates (0, 0). The normalization factor A_m is given by A_m = ∫ cos^{2m}( a_l / 2 ) dx, so that (1/A_m) cos^{2m}( a_l / 2 ) integrates to 1 over the sphere S^2. Using direct evaluation of the integral and Stirling's formula, the approximate normalization factor A_m = 4π/(m + 1) was proposed in [64]. It can be seen by inspection that A(C_n) is independent of the coordinate system, since it is only a function of the angle between the two vectors (Y, y_i).

This new vector directional kernel estimator can then be utilized in the development of an adaptive multivariate non-parametric filter (hereafter AMNFD), which provides reliable filtered estimates without any prior knowledge regarding signal or noise characteristics. Given the form of the optimal minimum variance estimator and the adaptive non-parametric kernel, the resulting non-parametric filter can be defined as:

ŷ_np = ( Σ_{l=1}^{n} x_l n^{−1} (1/A_m) cos^{2m}( a_l / 2 ) ) / ( Σ_{l=1}^{n} n^{−1} (1/A_m) cos^{2m}( a_l / 2 ) )    (3.96)

where a_l denotes the angle between the point y_l and the vector with spherical coordinates (0, 0). If it is not possible to access the noise-free color vectors x, the noisy measurements y can be used instead. The resulting filter is based solely on the available noisy vectors and the form of the minimum variance estimator:

ŷ_np = ( Σ_{l=1}^{n} y_l n^{−1} (1/A_m) cos^{2m}( a_l / 2 ) ) / ( Σ_{l=1}^{n} n^{−1} (1/A_m) cos^{2m}( a_l / 2 ) )    (3.97)

In the derivation of the adaptive non-parametric estimators presented in (3.91), (3.92), (3.93) and (3.97) a number of design parameters have been introduced, namely:
- the window size, and therefore the number of noisy measurement vectors available for the evaluation of the approximate density;
- the form of the smoothing factor h_l, where the choice of multiplier and distance measure can greatly affect the performance of the density estimator;
- the type of kernel used in (3.89);
- the vectors used in place of the actual, unavailable color vectors x_l in (3.89).

All of these elements affect the filtering process, since they determine the output of the estimator. In an adaptive formulation, Θ can be defined as the parameter vector which abstractly represents all the elements listed above. It is not necessary that all of these elements be treated as parameters in a specific design. Problem specifications and design objectives can be used to determine the elements included in the parameter vector Θ. By varying the different parameters in the design of the non-parametric kernel, different results x̂_Θ(Y) = m̂_Θ(x) can be obtained.



Suppose that m̂_i(x), i = 1, 2, …, P, are different non-parametric estimators, all based on the same sample record Y = (y_1, y_2, …, y_n) but possibly with different kernels K_1, K_2, …, K_P and different smoothing factors h_1, h_2, …, h_P. An overall estimator based on these values can be obtained as the expected value x̂_np = E( m̂(x) | y ), calculated over the given non-parametric values determined by the different techniques. Assuming that the different estimated values m̂_i(x) are available and that they are related to the observed sample through the model

y = m̂(x) + η    (3.98)

with η additive corruption noise, it can be claimed that the minimization of the expected error leads to the following solution for ŷ_np:

ŷ_np = ( Σ_{i=1}^{P} m̂_i(y) f_η( y − m̂_i(y) ) ) / ( Σ_{i=1}^{P} f_η( y − m̂_i(y) ) ) = Σ_{i=1}^{P} m̂_i(y) w_npi    (3.99)

with

w_npi = f_η( y − m̂_i(y) ) / Σ_{j=1}^{P} f_η( y − m̂_j(y) )    (3.100)

To calculate the exact value of the multiple non-parametric estimator, the function f_η(·) must be evaluated. Since it is generally unknown, it is approximated in a non-parametric fashion based on the set of available elemental values m̂_i(y). If P elemental estimates m̂_i(y), i = 1, 2, …, P, are available, the nominal parameter η_i = y − m̂_i(y) is introduced. Therefore, our objective is the non-parametric evaluation of the density f_η(·) using the set of available data points η = {η_1, η_2, …, η_P}. The approximation task can be carried out using any standard non-parametric approach, such as the different kernel estimators discussed in (3.90). For the simulation studies discussed in Sect. 3.6, the sample-point adaptive kernel estimator of (3.82) is used. Thus, the following estimate of the density f_η(y − m̂_i(y)) is used:

f̂_η( y − m̂_i(y) ) = f̂_η(η_i) = P^{−1} Σ_{l=1}^{P} (h_l)^{−p} K( (η_i − η_l) / h_l )    (3.101)

with the smoothing parameter calculated as:

h_l = P^{−k/p} A_l = P^{−k/p} Σ_{j=1}^{P} |η_j − η_l|    (3.102)

where η_j ≠ η_l for all j = 1, 2, …, P, and |η_j − η_l| is the absolute distance (L1 metric) between the two vectors. From (3.101) it can be claimed that f̂_η(η) integrates to 1, given the form of the approximation and the fact that the kernel K(·) results from a real density. Thus, the set of weights w_npi has the following properties:



1. Each weight is a positive number, w_npi ≥ 0.
2. The weights sum to one, Σ_{j=1}^{P} w_npj = 1.

These properties allow the weights to be interpreted as posterior probabilities used to incorporate prior information concerning local smoothness. Thus, each weight in (3.99) regulates the contribution of the associated filter through its posterior component density. The following comments can be made regarding the multiple non-parametric filter:

- The general form of the filter is a linear combination of nonlinear basis functions. The inputs in the mixture are the elemental filtered values obtained by the different non-parametric estimators applied to the problem, while the nonlinear basis functions (the weights) are determined by the form of the approximate density f̂_η and can take many different forms, such as Gaussian, exponential or triangular. It is not hard to see that, in the case of a Gaussian kernel, the multiple estimator of (3.99) can be viewed as a radial basis function (RBF) neural network.
- The adaptive procedure can be used to combine a variety of non-parametric estimators, each developed for a different value set of the parameter vector Θ. For example, such a structure can be used to combine elemental non-parametric filters derived for different window sizes W. The number of color vector samples utilized in the development of the non-parametric estimator depends on the window W centered around the pixel under consideration. Usually a square (3×3) or (5×5) window is selected. However, such a decision affects the filter's performance. In smooth areas, or when Gaussian noise is anticipated, a larger window (e.g. (5×5)) is preferable. On the other hand, near edges or when impulsive noise is assumed, a smaller window (usually a (3×3) window) is more appropriate. An adaptive filter which utilizes elemental filters with different window sizes, hereafter MAMNF35, is probably a better choice in an unknown or mixed noise environment. Using the same approach, other practical adaptive filters can be devised, such as the MAMNFEG, which utilizes two elemental non-parametric filters with an exponential and a Gaussian kernel, respectively. Due to the specific form of the kernel, it is anticipated that a non-parametric filter with a Gaussian kernel is probably a better choice for Gaussian noise smoothing. Similarly, a filter with an exponential kernel will provide better filtering results when impulsive or long-tailed noise is present. An adaptive design which allows both filters to be utilized simultaneously is of paramount importance in an unknown noise environment. Such examples emphasize the versatility of the proposed adaptive approach, which can provide a wide range of different practical filters.
- Although the filter in (3.99) has been derived as a generalization of a non-parametric estimator, it can be used to combine different heterogeneous estimators applied to the same task. Specifically, the designer can utilize a number of different elemental filters, such as order statistics based filters, the Bayesian adaptive filter, nearest neighbor filters and non-parametric estimators, and then combine all the different results using (3.99)-(3.102). The effectiveness of the adaptive scheme is determined by the validity of the elemental filtered results and the approximation required in (3.100). However, due to the different justifications of the elemental filters, extensive simulation results are the only way to examine the performance of the filter in practical applications. Experimentation with color images will be used to demonstrate the effectiveness of the multiple filter and to assess the improvement in performance achieved by a multiple non-parametric filter vis-à-vis a simple non-parametric filter.

The multiple filter can be a powerful design tool, since it allows the combination of filters designed using different methodologies and different design parameters. This is of paramount importance in practical applications, since it allows for the development of efficient adaptive filters when no indication for the selection of a suitable filtering approach is available.
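As a concrete illustration, the Python sketch below combines P elemental filtered values according to (3.99)-(3.102), approximating f_η with a Gaussian kernel over the residuals η_i = y − m̂_i(y). The choice of elemental filters (which might be, for instance, a 3×3 mean, a 3×3 vector median and a 5×5 mean applied to the same neighborhood), the Gaussian kernel and the value k = 0.4 are assumptions made for the example.

```python
import numpy as np

def multiple_nonparametric_filter(y, elemental_estimates, k=0.4):
    """Combine P elemental filtered values per (3.99)-(3.102).

    y: (p,) noisy center vector.
    elemental_estimates: list of (p,) vectors m_i(y) produced by different
    elemental filters (different kernels, window sizes, filter families, ...).
    """
    m = np.asarray(elemental_estimates, dtype=float)       # shape (P, p)
    P, p = m.shape
    eta = y - m                                             # residuals eta_i

    # Sample-point smoothing factors over the residual set, as in (3.102).
    A = np.abs(eta[:, None, :] - eta[None, :, :]).sum(axis=(1, 2))
    h = np.maximum(P ** (-k / p) * A, 1e-12)

    # Kernel density value of each residual, as in (3.101) (Gaussian kernel).
    u = np.linalg.norm(eta[:, None, :] - eta[None, :, :], axis=2) / h[None, :]
    f_eta = (h ** (-p) * np.exp(-0.5 * u ** 2)).mean(axis=1)

    w = f_eta / f_eta.sum()                                 # weights w_npi, (3.100)
    return (w[:, None] * m).sum(axis=0)                     # output, (3.99)
```

Elemental estimates whose residuals fall in a dense region of the residual distribution receive large weights, so the combined output gravitates toward the filters that agree with each other for the current pixel.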

3.5 Adaptive Morphological Filters

3.5.1 Introduction

In recent years, a great deal of work has been reported on the development of geometrically based image processing techniques, especially on transformations based on the morphological operations of erosion, dilation, opening and closing. Mathematical morphology can be described geometrically, in terms of the actions of the operators on binary, monochrome or color images. The geometric description depends on small synthetic images called structuring elements. This form of mathematical morphology, often called structural morphology, is highly useful in the analysis and processing of images [65]-[70]. Since objects in nature are generally random in their shape, size and location, the notion of a random set provides the means of studying the geometrical parameters of naturally occurring objects.

Mathematical morphology was first introduced for the case of binary images. The objects within a binary image are easily viewed as sets. The interaction between an image set and a second set, the structuring element, produces transformations of the image. Measurements taken of the image set, the transformed set, and the difference between the two provide information describing the interaction of the set with the structuring element. The interactions between the image set and the structuring element are set-based transformations. The intersection or union of translated, transposed or complemented versions of the image set and structuring element filter out information. Through the utilization of the umbra, an n-dimensional function described in terms of an (n + 1)-dimensional set, morphological transformations can be applied to monochrome images [67]. Thresholding a monochrome image into a group of two-dimensional sets representing the three-dimensional umbra also provides a method of transforming gray-scale images using the original definitions of mathematical morphology.

Throughout this book, color images are considered as vector signals, and it is well known that the correlation of the color components is of paramount importance in the development of efficient color image processing techniques. In this section another aspect of color image processing is studied: the effect of geometrical adaptivity in processing color images. Morphological techniques developed for use with monochrome images can be extended to color images by applying the algorithm to each color component separately. Since morphology is readily defined in terms of scalar signals, the individual color channels can be processed separately as three monochrome images [74]. The idea is to introduce new types of opening and closing operators that allow for the development of fast processing algorithms. The new operators utilize structuring elements that adapt their shape according to the local geometrical structures of the processed images. Although the proposed algorithms process color images in quite different ways from those of the vector-based filters, they can improve the quality of the processed color images by smoothing out noise while preserving fine details.

Mathematical morphology is based on set theory. The reason for using set theory is to consider objects as being part of a space S. The description of an object is therefore reduced to describing an element or subset X in S. This section summarizes several definitions related to mathematical morphology and relates them to the application of morphological transformations in image processing, starting with the simplest case of binary images and expanding to the case of monochrome images. Consider initially a two-dimensional space. This space may be pictured as a binary image where each object in the image is a subset of the digital space Z^2. The mathematical definition of an object X in a binary image in terms of a set is:

X = { (i, j) : f(i, j) = 1 }    (3.103)

where f(·) is called the characteristic function of X. The remaining space in Z^2 is the background or complement of X, denoted by the set X^c and defined as:

X^c = { (i, j) : f(i, j) = 0 }    (3.104)

The above definitions may also be given in terms of vectors. If a vector x is the vector from the origin to the point (x, y), then (3.103) and (3.104) become:

X = { x : f(x) = 1 } ,    X^c = { x : f(x) = 0 }    (3.105)

The set X also has associated with it its translate and its transposition. The translate of X by a vector b is denoted by X_b. The transposition of X, or the symmetric set of X, is denoted by X̌:



X_b = { x : x − b ∈ X } ,    X̌ = { x : −x ∈ X }    (3.106)

Consider two sets, X and B. The set B is said to be included in X if every element of B is also an element of X. If B hits X, then the intersection of X and B is a non-empty set. The opposite of B hitting X is B missing X; in this case the intersection of the two sets is empty. If the set of all possible subsets of S, denoted by F(S), is considered, and supposing that X and B are elements of F(S), then the following definitions may be made:

- B is included in X:  B ⊆ X  ⇔  b ∈ X for every b ∈ B
- B hits X:  B ↑ X  ⇔  B ∩ X ≠ ∅
- B misses X:  B ⊆ X^c

Mathematical morphology is concerned with the interaction between the set X, the second set B, and their complements, translates and transposes. The set X is usually associated with an image, while the set B is the structuring element. To relate mathematical morphology to other image processing techniques, the structuring element B corresponds to the mask used in linear FIR filtering. The interaction between the image X and the structuring element B transforms the image into a new `filtered' image. Depending on the size and shape of both X and B and on the type of interaction considered, different transformations result. It is these transformations which enable information to be extracted from an image for use in various applications.

There exist two basic transformations in mathematical morphology: erosion and dilation. These two transformations form the basis upon which all other morphological transformations are built. The erosion of X by B is defined as the set of all translation vectors such that, when B is translated by any of them, its translate is included in X. Assuming that Y is the eroded set of X, in mathematical terms:

Y = { x : B_x ⊆ X }    (3.107)

The operation resembles the definition of the Minkowski subtraction:

X ⊖ B = ∩_{b ∈ B} X_b    (3.108)

in the sense that the morphological erosion of a set X by the structuring element B is the Minkowski subtraction of X and B̌, the symmetric set of B:

Y = { x : B_x ⊆ X } = ∩_{b ∈ B} X_{−b} = ∩_{−b ∈ B̌} X_{−b} = X ⊖ B̌    (3.109)

To derive the definition of the second basic morphological transformation, dilation, consider an operation which is dual to erosion. A transformation is the dual of another if it is the transformation that results when the known operator is applied to the complement of a set and the complement of the result is taken. Assuming that dilation is the dual transformation of erosion, the following equation is obtained:

X ⊕ B = ( X^c ⊖ B̌ )^c    (3.110)

The erosion determines all the B_b which are included in X^c. This is equivalent to determining all the B_b which do not hit X. The complement of the set which this statement produces must therefore be the set of all B_b which hit X. This is the definition of the morphological transformation of dilation:

X ⊕ B = { x : B_x ∩ X ≠ ∅ } = { x : B_x ↑ X }    (3.111)

Erosion and dilation respectively shrink and expand an image object. However, they are not inverses of each other; that is, these transformations are not information preserving. A dilation after an erosion will not necessarily return the image to its original state, nor will an erosion of a dilated object necessarily restore the original object. This loss of information, and the results obtained when cascading the two transformations one after the other, provide the basis for the definition of another pair of morphological transformations: morphological opening and closing.

The morphological transformations of opening and closing are not defined simply as the cascade of an erosion followed by a dilation, or of a dilation followed by an erosion, because the symmetric set of B is not used in both steps. A morphological erosion or dilation as defined above is first performed on an image X using a structuring element B. The second transformation is, respectively, a dilation or an erosion, not by the structuring element B but by the symmetric set B̌. Formally, the morphological opening of a set X by a structuring element B, denoted by X ∘ B, is defined in terms of the Minkowski operators as:

X ∘ B = ( X ⊖ B ) ⊕ B̌    (3.112)

This transformation is an erosion of X using a structuring element B, followed by a dilation using a structuring element equal to B̌. In other words, the set is first shrunk and then re-expanded, not necessarily to its original state. The definition of the morphological closing of a set X by a structuring element B may be derived in a similar fashion. Morphological closing, denoted by X • B, is the dual operation of morphological opening and may also be defined in terms of the Minkowski operators:

X • B = ( X ⊕ B ) ⊖ B̌    (3.113)

The discussion of mathematical morphology has so far been restricted to binary images and the space Z^2. Extending the definitions enables the use of mathematical morphology on monochrome (gray-scale) images. In the binary case the two dimensions were the (i, j) coordinates of the binary image.



If the gray-scale values of an image X(i, j) are taken as the third dimension, then an image becomes a surface in Z^3. The term umbra, U[X], was defined in [67] as a set which extends unbroken indefinitely downward in the negative Z direction below the surface of the two-dimensional function. A point p = (i, j, k) is an element of an image's umbra if and only if k ≤ X(i, j). An image's umbra is therefore a set in Z^3. Once this definition of a set in a three-dimensional space representing a monochrome image is made, the extension of morphological transformations to monochrome images is straightforward. Structuring elements also become two-dimensional functions defined over a domain. The set associated with the two-dimensional structuring element function is defined as all points (i, j, k) such that k is non-negative and (i, j) lies in the domain over which B is defined. If the structuring element is restricted so that B(i, j) is uniformly equal to zero over the entire domain of B, then B is considered to be a flat structuring element [69]. Once the assumption of a flat B is made, the set associated with the structuring element becomes a set in two dimensions: simply the set of all points (i, j) over which B is defined. Therefore, the definitions of monochrome erosion and dilation simplify to:

(X ⊖ B)(i, j) = min_{(t_1, t_2) ∈ B_{i,j}} X(t_1, t_2) ,    (X ⊕ B)(i, j) = max_{(t_1, t_2) ∈ B̌_{i,j}} X(t_1, t_2)    (3.114)

where B now represents the set of all points in the structuring element, B̌ is the symmetric set of B, and B_{i,j} is the translation of the set B by the two-dimensional vector (i, j). (X ⊖ B)(i, j) and (X ⊕ B)(i, j) represent the two-dimensional functions corresponding to the images obtained after eroding or dilating an image X by a flat structuring element B. Using (3.112) and (3.113) and the above definitions of gray-scale erosion and dilation, the monochrome morphological transformations of opening and closing may be defined as:

(X ∘ B)(i, j) = max_{(s_1, s_2) : (i, j) ∈ B_{s_1, s_2}} [ min_{(t_1, t_2) ∈ B_{s_1, s_2}} X(t_1, t_2) ]

(X • B)(i, j) = min_{(s_1, s_2) : (i, j) ∈ B_{s_1, s_2}} [ max_{(t_1, t_2) ∈ B_{s_1, s_2}} X(t_1, t_2) ]    (3.115)

The definitions of the monochrome morphological transformations describe operations which enable noisy images to be filtered. Combinations of several transformations may be used to improve the noise removal; morphological monochrome filters typically combine the results from several openings and closings of an image X by a set of several structuring elements. The conventional morphological operators use one structuring element with fixed size and fixed shape. It is known that a fixed structuring element may cause the loss of image details and of the detailed parts of large objects, and may cause distortion in smooth areas during noise filtering.
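For reference, the flat-structuring-element operations of (3.114)-(3.115) can be written compactly as follows. This Python sketch assumes a small binary mask se (True where the structuring element is defined) and uses simple edge padding at the image borders, a choice made here for brevity rather than taken from the text.

```python
import numpy as np

def erode(img, se):
    """Flat gray-scale erosion: minimum of img over the structuring element."""
    H, W = img.shape
    h, w = se.shape
    oy, ox = h // 2, w // 2
    out = np.full_like(img, img.max())
    padded = np.pad(img, ((oy, oy), (ox, ox)), mode="edge")
    for dy in range(h):
        for dx in range(w):
            if se[dy, dx]:
                out = np.minimum(out, padded[dy:dy + H, dx:dx + W])
    return out

def dilate(img, se):
    """Flat gray-scale dilation, using the symmetric (reflected) structuring element."""
    H, W = img.shape
    h, w = se.shape
    oy, ox = h // 2, w // 2
    out = np.full_like(img, img.min())
    padded = np.pad(img, ((oy, oy), (ox, ox)), mode="edge")
    se_sym = se[::-1, ::-1]
    for dy in range(h):
        for dx in range(w):
            if se_sym[dy, dx]:
                out = np.maximum(out, padded[dy:dy + H, dx:dx + W])
    return out

def opening(img, se):
    """Morphological opening: erosion followed by dilation."""
    return dilate(erode(img, se), se)

def closing(img, se):
    """Morphological closing: dilation followed by erosion."""
    return erode(dilate(img, se), se)
```

Applied channel by channel, these operators give the marginal (component-wise) morphological processing of a color image mentioned above.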



In fact, many other operators with a single fixed operational window share the same problems, and many approaches have been suggested to deal with them. Among them, a type of new opening operator (NOP) and new closing operator (NCP) was introduced in [72]. The structuring element of the NOP and NCP adapts its shape according to the local geometric structures of the processed image, and can be any shape formed by connecting a given number N of pixels. The NOP can be developed on the basis of (3.112)-(3.115). The opening definition in (3.115) states that, for a flat structuring element acting like a moving window fitted over the features around the pixel (i, j) from the inside of the surface, the output value for the pixel (i, j) is the minimum value in the fitted window B. The group opening defined in (3.115) computes the maximum over all the minima obtained from the opening by each individual G_k. To achieve a larger degree of freedom in manipulating the geometric structures of the image than that of (3.115), a large set of group openings is required before selecting the maximum as output. Denoting the set of all possible structuring elements formed by connecting N points as Γ_N, the NOP is defined as:

(X ∘ Γ_N)(i, j) = max_{B^k ∈ Γ_N} [ (X ∘ B^k)(i, j) ]    (3.116)

In practice it is impossible to compute (3.116) by conventional opening operations, since there are too many elements in Γ_N. To alleviate the problem, a way to search directly for the structuring element that produces the maximum of the group openings was devised in [72]. In essence, the algorithm finds a connected structuring element that best fits the local structure and then flattens the feature by assigning the minimum value within the domain of the selected element to the pixel (i, j). A structuring element that best fits the local feature is the trace of the maxima containing the point (i, j). In other words, an adaptive structuring element which maximizes the minimum is sought. In this way, the information extracted from an image is biased only by the size, not the shape, of the structuring element. The size N is usually chosen according to the requirements of the specific image processing task. According to the above, the objective of the NOP operator is to find N connected points that trace the maxima of the local feature along and including the point (i, j), and then assign the minimum value in the window of these chosen N points as the output for the pixel (i, j). Based on this interpretation of the NOP, it is possible to develop a fast algorithm for computing it. The NCP, as the dual of the NOP, has its structuring element follow the trace of the minima of the local feature containing the point (i, j). The output at (i, j) is assigned the maximum value of the pixels within the domain of the adaptive structuring element. The NCP at point (i, j) is essentially a search for an adaptive structuring element which minimizes the maximum. Thus, it is easy to complement the NOP definition to obtain that of the NCP:

(X • Γ_N)(i, j) = min_{B^k ∈ Γ_N} [ (X • B^k)(i, j) ]    (3.117)



Based on (3.117), the NCP at point (i, j) has to find N connected points that trace the minima of the local feature along and including the point (i, j), and then assign the maximum value in the window of these chosen N points as the output for the pixel (i, j). The NCP fills any valley smaller than N points to form a larger basin of at least N points, whose shape contains the adaptive structuring element. If the area of a uniform basin is larger than or equal to N pixels, its surface structure will not be altered. Other points of the surface, such as the slopes and the peaks, remain intact under the NCP operation. It should be noted that the NOP (NCP) cannot be decomposed into an erosion (dilation) followed by a dilation (erosion).

3.5.2 Computation of the NOP and the NCP

Since the NOP and NCP are derived from the conventional opening and closing operators, they share many of their properties, such as translation invariance, increasingness, ordering and idempotency. The new operators also possess some distinct properties that exploit geometric structure. The intuitive geometric operation is the most distinguishing characteristic of the NOP and the NCP, and it sets them apart from most of the existing linear and nonlinear processing techniques discussed in this book. The definition and the properties of the NOP and NCP show great potential for the development of fast algorithms. To develop this potential fully is a complicated problem that requires considerable effort; the basic algorithm structure proposed in [73] is only a straightforward realization of the definition of the NOP and NCP, and study has shown that there are many ways to develop it further. In this section, a fast and computationally efficient algorithm for the computation of the NOP and NCP is reviewed.

The core of the NOP and NCP is the search for the adaptive structuring element which follows the shape of the local features. An essential requirement in the search is connectivity: the N-point structuring element must be connected via the current pixel (i, j). The search procedure is iterated until all N points required by the NOP or NCP definition are found. The NOP algorithm can be divided into five steps, of which the middle three are repeated in finding the N points which trace the local feature with the largest values. The five steps are:

1. Initialization: all buffers, counters, registers and flags are initialized.
2. Search: the immediate neighborhoods of the points newly included in the structuring element during the previous iteration are searched, flagged and identified as possible candidates. The candidates b = (b_i, b_j) are arranged in descending order of their pixel values f(b), so that f(b_1) ≥ … ≥ f(b_k) ≥ … Since the best structuring element for the opening follows the maxima of the local feature, the largest K of the ordered candidates are singled out for the decision in step 3. In other words, K is the lesser of the number of points still to be found and the number of possible candidates. The rest of the candidates are purged while their flags remain set, to indicate exclusion from any further iteration.
3. Decision: the K candidates are examined for inclusion in the set of the structuring element by comparing them with the minimum value f_MIN of the points already chosen. Initially, f_MIN = f(i, j). There are three possible cases:
   a) All K candidates have pixel values larger than or equal to f_MIN, that is, f(b_K) ≥ f_MIN. In this case, the coordinates of all K candidates are assigned to the set of the structuring element.
   b) Some of the K candidates have pixel values smaller than f_MIN, that is, f(b_1) ≥ f_MIN > f(b_K). Only those coordinates with pixel values not smaller than f_MIN are assigned to the set of the structuring element; the others are left as candidates for the next search cycle.
   c) All K candidates have pixel values smaller than f_MIN, that is, f(b_1) < f_MIN. In this case, the coordinate b_1 of the largest pixel value is included in the set of the structuring element as a connecting point to larger outer points, and f_MIN is replaced by f(b_1).
4. Update: buffers, counters and registers are updated according to the decision made in step 3. If fewer than N points have been located, steps 2 to 4 are repeated. Otherwise, step 5 produces the output.
5. Output: assign the minimum pixel value f_MIN in the window of the N-point structuring element as the output for the pixel (i, j). The search is complete.

To ensure that the search progresses smoothly, two buffers, three counters and two registers keep track of the records in each iteration:

1. Buffer {a} stores the pixel coordinates chosen to be in the set of the adaptive structuring element. Initially, {a} contains (i, j).
2. Buffer {b} stores the coordinates of all possible candidates for the current iteration. These include the rejected b_k and the immediate neighbors of the points added to {a} during the previous iteration. Initially, {b} contains the eight immediate neighbors of (i, j).
3. Counter M keeps count of the number of pixels that have been located for the structuring element. Initially, M = 1, since (i, j) is always included in the set.
4. Counter BN stores the number of all possible candidates.
5. Counter K stores the number of pixels to be decided upon in the current iteration. If N ≤ 9, K is usually set to N − M. In the case of N > 9, K is set to the lesser of N − M and BN.

6. Register L holds the position of the last entry in {a}. This ensures that only the neighbors of those points added to the structuring element during the current iteration will be searched in the next cycle.
7. A register stores the smallest pixel value $f_{MIN}$ in the domain of the structuring element.
In addition, every pixel is associated with a flag which is set once the pixel has been chosen for inclusion in the search. The flag guarantees that a pixel will not be searched twice. The area of the possible domain for the structuring element is the rectangle bounded exclusively by $(i-N, j-N)$, $(i-N, j+N)$, $(i+N, j-N)$ and $(i+N, j+N)$. To ensure that the search will not go beyond the image frame, the original image is augmented with a one-pixel wide frame whose values equal the smallest pixel value. A flowchart of the NOP algorithm is shown in Fig. 3.7.

The NCP algorithm can be derived directly from its NOP dual with the following changes:
1. Reverse the ordering, so that the candidates are arranged in ascending order of pixel value:
$$f(b_1) \leq f(b_2) \leq \cdots \leq f(b_k) \leq \cdots \leq f(b_{B_N})$$
The K smallest of the ordered candidates are retained for the decision.
2. $f_{MIN}$ is changed to $f_{MAX}$, so that the maximum pixel value in the set of the structuring element is stored and output.
3. A candidate $b_k$, $1 \leq k \leq K$, is chosen if $f(b_k)$ is smaller than or equal to $f_{MAX}$; that is, all comparison inequalities between $f(b_k)$ and $f_{MAX}$ are reversed.
Moreover, the original image is augmented with a one-pixel wide frame whose values equal the largest pixel value.
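By way of illustration, the following Python sketch computes the NOP output at a single pixel of one channel. It is a simplified rendering of the search just described: the batched K-candidate decision step and the flag/buffer bookkeeping are collapsed into a greedy one-candidate-per-iteration growth. The function name nop_at and the toy patch are illustrative assumptions, not the reference implementation of [73].

import numpy as np

def nop_at(f, i, j, N):
    # Grow an N-point connected structuring element through (i, j) that
    # traces the local maxima, then output the minimum value over it
    # (a sketch of the NOP search, not the reference algorithm).
    rows, cols = f.shape
    chosen = {(i, j)}          # buffer {a}: points already in the structuring element
    candidates = {}            # buffer {b}: frontier pixels mapped to their values
    frontier = [(i, j)]
    while len(chosen) < N:
        # search step: flag the immediate neighbours of the newly added points
        for (r, c) in frontier:
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols \
                            and (rr, cc) not in chosen and (rr, cc) not in candidates:
                        candidates[(rr, cc)] = f[rr, cc]
        if not candidates:
            break
        # decision step (simplified): add the single largest candidate, which
        # keeps the element connected while tracing the maxima
        best = max(candidates, key=candidates.get)
        del candidates[best]
        chosen.add(best)
        frontier = [best]
    # output step: the minimum value over the adaptive structuring element
    return min(f[p] for p in chosen)

# usage: a 9-point NOP flattens the isolated one-pixel peak (9) down to 5
patch = np.array([[5, 5, 5, 5],
                  [5, 9, 1, 5],
                  [5, 5, 5, 5],
                  [5, 5, 5, 5]], dtype=float)
print(nop_at(patch, 1, 1, N=9))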

3.5.3 Computational Complexity and Fast Algorithms

In the algorithms discussed above, the major computational effort lies in the ordering and the comparison of the pixel values. With a good and efficient ordering algorithm, it is fair to say that the number of comparisons used for an image represents, to a certain degree, the computational complexity. Therefore, reducing the number of comparisons required per pixel results in less computational effort. Fast algorithms can be obtained by exploiting the nature of the basic search. Keeping connectivity in mind, and the fact that the search for the maximum path in the NOP is performed outward from the current computation point (i, j), close examination reveals that those pixels included in the final structuring element before the minimum pixel is located for the first time will share the same output $f_{MIN}$ as (i, j).

Fig. 3.7. A flowchart of the NOP search algorithm

Denote the included pixels as $z_1, z_2, \ldots, z_k, \ldots, z_N$, where k corresponds to the order in which $z_k$ is found in the search. The current computation point is always included as $z_1 = (i, j)$. There may be more than one point included in the structuring element N whose value equals the minimum value $f_{MIN}$ used as the output of the NOP at (i, j). Assume that $z_{k_1}, \ldots, z_{k_t}$, where $1 \leq k_1 \leq k_2 \leq \cdots \leq k_t \leq N$ and $1 \leq t \leq N$, are the pixels whose values equal $f_{MIN}$; then for $1 \leq k \leq k_1$ the following holds:
$$(X \circ N)(z_k) = f(z_k) = f_{MIN_{ij}} \qquad (3.118)$$
That is, the pixels $z_k$ for $1 \leq k \leq k_1$ and the pixels $z_{k_i}$ for $i = 2, \ldots, t$ do not require a search for a structuring element of their own, since they share the same output as (i, j).

The same property can be applied to the NCP, except that $z_{k_1}, \ldots, z_{k_t}$ are now the pixels whose values equal $f_{MAX}$. The output values for the pixels $z_{k_2}, \ldots, z_{k_t}$ are the same as their input values $f_{MAX}$, and the output values for the pixels located in the set N before $f_{MAX}$ is first located are assigned as:
$$(X \bullet N)(z_k) = f(z_k) = f_{MAX_{ij}} \qquad (3.119)$$
To implement the fast algorithms for the NOP and NCP, only a flag for every pixel in the image needs to be added. The flags of those $z_k$ satisfying (3.118) or (3.119), and of $z_{k_2}, \ldots, z_{k_t}$, are set to signify that their output values have already been determined when they become the current position.

One way to speed up the search is to test whether the neighborhood is a uniform area, that is, whether $f(b_1) = f(b_K)$, at the beginning of the decision step. If it is a uniform area, then all $b_k$ for $1 \leq k \leq K$ are included in the structuring element and
$$(X \circ N)(i, j) = \min(f_{MIN}, f(b_K)) \qquad (3.120)$$
$$(X \bullet N)(i, j) = \max(f_{MAX}, f(b_K)) \qquad (3.121)$$
For a uniform area, all eight neighbors of (i, j) are located and included in N in one operation. These points also share the same output as (i, j); that is, the output flags of all N points are set.

The actual computational complexity of the NOP and NCP depends on the image to be processed. In the simplest case, where (i, j) lies in a uniform area, only a few comparisons are required before the resultant structuring element is located. The worst case occurs when the pixel (i, j) is at the end of a one-point wide line; in this case only one pixel is located in each iteration of the search and the resultant computational burden is high.

The NOP and NCP are usually used together to construct an adaptive morphological filter. In general, the adaptive morphological filter is a two-stage filter: the first stage is the processing by the NOP and the NCP, and the second stage is the post-processing of the image. Post-processing is required because noise patterns connected to the edges of large objects are considered part of those objects by the NOP and the NCP, and are therefore not filtered. The procedure of a simple and direct post-processing is as follows:
1. Denote the image filtered by the adaptive morphological filter as y(i, j). Decompose y(i, j) into a coarse image $z_1(i, j)$ and a detail image $z_2(i, j)$ by a conventional opening or closing morphological filter with a small structuring element, such as a (2×2) element. This decomposition separates the noise patterns from the large objects:
$$z_1(i, j) = (y \circ B) \bullet B \qquad (3.122)$$
$$z_2(i, j) = y(i, j) - z_1(i, j) \qquad (3.123)$$
As a result, $z_1(i, j)$ contains only objects not smaller than B, leaving $z_2(i, j)$ with the isolated noise pixels and the fine details of size smaller than B.
2. Remove the noise patterns in $z_2(i, j)$ with the same adaptive morphological filter but with a smaller structuring element, which removes the noise but leaves the fine details intact. The filtered image $z_3(i, j)$ then contains only the fine details. Thus, for $N_1 < N$:
$$z_3(i, j) = (z_2 \circ N_1) \bullet N_1 \qquad (3.124)$$
3. Output the final image $y^*(i, j)$ by adding the noise-free details back to the coarse image. The post-processed image $y^*(i, j)$ has sharper edges than y(i, j):
$$y^*(i, j) = z_1(i, j) + z_3(i, j) \qquad (3.125)$$
The main drawback of this simple post-processing method is that it cannot remove noise pixels connected to one-pixel wide details. Although more sophisticated post-processing methods can be used to deliver better results, these remaining noise pixels are usually negligible, since the human eye is more tolerant of small amounts of noise in the neighborhood of an edge.
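A minimal sketch of this two-stage post-processing on a single color channel is given below. The (2×2) open-closing of (3.122) is realized with scipy's grayscale morphology; since the smaller NOP-NCP filter of (3.124) is not reproduced here, a small median filter is used as an assumed stand-in for cleaning the detail image.

import numpy as np
from scipy.ndimage import grey_opening, grey_closing, median_filter

def post_process(y, detail_filter_size=3):
    # y is one channel already filtered by the adaptive morphological filter.
    # (3.122): coarse image via a conventional open-closing with a small (2x2) element
    z1 = grey_closing(grey_opening(y, size=(2, 2)), size=(2, 2))
    # (3.123): detail image holds the isolated noise plus the fine structures
    z2 = y - z1
    # (3.124): clean the detail image with a smaller filter; a median filter is
    # used here only as a simple substitute for the smaller NOP-NCP filter
    z3 = median_filter(z2, size=detail_filter_size)
    # (3.125): recombine the noise-free details with the coarse image
    return z1 + z3

# usage on a synthetic channel
rng = np.random.default_rng(0)
y = rng.integers(0, 256, size=(64, 64)).astype(float)
print(post_process(y).shape)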

Fig. 3.8. The adaptive morphological filter

3.6 Simulation Studies

A set of experiments has been conducted in order to evaluate the adaptive designs presented. In the first part, the performance of adaptive designs based on fuzzy, statistical and non-parametric techniques is compared with that of commonly used filters, such as the vector median filter (VMF), the generalized vector directional filter (GVDF), the distance-direction filter (DDF) and the hybrid filters of [79]. The noise attenuation properties of the different filters are examined utilizing the color images `Lenna' and `Peppers'. The test images have been contaminated using various noise source models in
order to assess the performance of the filters under different scenarios (see Table 3.1). The original images as well as their noisy versions are represented in the RGB color space, and the filters operate on the images in the RGB color space.

Table 3.1. Noise Distributions

Number   Noise Model
1        Gaussian (σ = 30)
2        impulsive (4%)
3        Gaussian (σ = 15), impulsive (2%)
4        Gaussian (σ = 30), impulsive (4%)
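For reference, the contaminations of Table 3.1 can be synthesized along the following lines. This is a sketch that assumes zero-mean additive Gaussian noise and salt-and-pepper impulses applied independently to each channel; the exact noise generators used for the reported experiments are not specified here.

import numpy as np

def contaminate(img, sigma=0.0, impulse_prob=0.0, seed=0):
    # Add zero-mean Gaussian noise (std sigma) and replace a fraction
    # impulse_prob of the samples with 0/255 impulses, per channel.
    rng = np.random.default_rng(seed)
    noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
    mask = rng.random(img.shape) < impulse_prob
    noisy[mask] = rng.choice([0.0, 255.0], size=mask.sum())
    return np.clip(noisy, 0, 255).astype(np.uint8)

# usage: the two mixed models of Table 3.1 on a random RGB test array
img = np.random.default_rng(1).integers(0, 256, (128, 128, 3), dtype=np.uint8)
model_3 = contaminate(img, sigma=15, impulse_prob=0.02)   # Gaussian sigma = 15 plus 2% impulses
model_4 = contaminate(img, sigma=30, impulse_prob=0.04)   # Gaussian sigma = 30 plus 4% impulses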

Since it is impossible to discuss all the fuzzy adaptive filters resulting from the theory introduced in this chapter, five different filters based on the designs are constructed. These filters are compared, in terms of performance, with other widely used multichannel filters (see Table 3.2). In particular, a simple rank-order filter is introduced, based on the distance measure of [36], hereafter called the content-based rank filter (CBRF), which can be seen as an adaptive fuzzy system with the defuzzification rule of (3.27). The simulation studies also include the fuzzy vector directional filter (FVDF), which is based on the defuzzification strategy of (3.2), the membership formula of (3.32) and the aggregated distance of (3.35) evaluated over the filtering window W(n). The adaptive nearest neighbor filter (ANNF) [37], based on the defuzzification strategy of (3.2), the membership function formula of (3.23) and the distance measure of (3.33), is also included in the set. Further, the same defuzzification formula and the same membership function are utilized along with the aggregated distance of (3.29) to derive the double window nearest neighbor filter, hereafter ANNMF. By using the Canberra distance and the distance measure of (3.27) instead of the angular distance, four other filters have been devised, named CANNF, CANNMF, CBANNF and CBANNMF, respectively (see Table 3.2).

A number of different objective measures can be utilized to assess the performance of the different filters. All of them provide some measure of closeness between two digital images by exploiting the differences in the statistical distributions of the pixel values [1], [11]. The most widely used measure is the normalized mean square error (NMSE), defined as:
$$NMSE = \frac{\sum_{i=0}^{N_1}\sum_{j=0}^{N_2} \| y(i, j) - \hat{y}(i, j)\|^2}{\sum_{i=0}^{N_1}\sum_{j=0}^{N_2} \| y(i, j)\|^2} \qquad (3.126)$$
where $N_1$, $N_2$ are the image dimensions, and $y(i, j)$ and $\hat{y}(i, j)$ denote the original image vector and the estimate at pixel (i, j), respectively. In many application areas, such as multimedia, telecommunications (e.g. HDTV), production of motion pictures, the printing industry and graphic arts,

Table 3.2. Filters Compared

Notation   Filter                                                                                    Reference
AMF        Arithmetic (Linear) Mean Filter                                                           [1]
VMF        Vector Median Filter                                                                      [76]
BVDF       Basic Vector Directional Filter                                                           [42]
CBRF       Content-based Rank Filter                                                                 [36]
GVDF       Generalized Vector Directional Filter with an α-trimmed magnitude module                  [43]
DDF        Directional-Distance Filter                                                               [78]
HF         Hybrid Directional Filter                                                                 [79]
AHF        Adaptive Hybrid Directional Filter                                                        [79]
FVDF       Fuzzy Vector Directional Filter with structure/weights determined through (3.2), (3.32), (3.35)   [22]
ANNF       Adaptive Nearest Neighbor Filter with (3.2), (3.23), (3.32)                               [34]
ANNMF      Double window Adaptive Nearest Neighbor Filter with (3.2), (3.23), (3.24)                 [37]
CANNF      Adaptive Nearest Neighbor Filter, eqs. (3.2), (3.23), (3.33)
CANNMF     Double window Adaptive Nearest Neighbor Filter, (3.2), (3.23), (3.25)
CBANNF     Adaptive Nearest Neighbor Filter, (3.2), (3.22), (3.34)
CBANNMF    Double window Adaptive Nearest Neighbor Filter with (3.2), (3.22) and (3.27)
AMNFE      Adaptive Non-parametric Filter with an exponential kernel, (3.93)                         [11]
AMNFG      Adaptive Non-parametric Filter with a Gaussian kernel, (3.93)                             [11]
AMNFD      Adaptive Non-parametric Filter with a directional kernel, (3.97)                          [11]
BFMA       Bayesian Adaptive Filter with median and mean sub-filters

greater emphasis is given to perceptual image quality. Consequently, the perceptual closeness (alternatively, the perceptual difference or error) of the filtered image to the uncorrupted original image is ultimately the best measure of the efficiency of any color image filtering method. There are basically two major approaches used for assessing the perceptual error between two color images; in order to make a complete and thorough assessment of the performance of the various filters, both approaches are used in this section.

The first approach is to compute an objective measure of the perceptual error between two color images. This leads to the question of how to estimate the perceptual error between two color vectors. Precise quantification of the perceptual error between two color vectors is one of the most important open research problems. RGB is the most popular color space used
conventionally to store, process, display, and analyze color images. However, the human perception of color cannot be described using the RGB model. Therefore, measures such as the normalized mean square error (NMSE) defined in the RGB color space are not appropriate for quantifying the perceptual error between images. Thus, it is important to use color spaces which are closely related to the human perceptual characteristics and suitable for defining appropriate measures of perceptual error between color vectors. A number of such color spaces are used in areas such as computer graphics, motion pictures, graphic arts, and the printing industry. Among these, perceptually uniform color spaces are the most appropriate for defining simple yet precise measures of perceptual error. As seen in Chap. 1, the Commission Internationale de l'Eclairage (CIE) has standardized two color spaces, L*u*v* and L*a*b*, as perceptually uniform. The L*u*v* color space is chosen for this analysis because it is computationally simpler than the L*a*b* color space, without any sacrifice in perceptual uniformity. The conversion from the non-linear RGB color space (the non-linear RGB values are the ones stored in the computer and applied to the CRT of the monitor to generate the image) to the L*u*v* color space is explained in detail in Chap. 1 and elsewhere [80]. The non-linear RGB values of both the uncorrupted original image and the filtered image are converted to the corresponding L*u*v* values for each of the filtering methods under consideration. In the L*u*v* space, the L* component defines the lightness and the u* and v* components together define the chromaticity. In a uniform color space, such as L*u*v*, the perceptual color error between two color vectors is defined as the Euclidean distance between them, given by:

$$\Delta E_{Luv} = \left[(\Delta L^*)^2 + (\Delta u^*)^2 + (\Delta v^*)^2\right]^{\frac{1}{2}} \qquad (3.127)$$
where $\Delta E_{Luv}$ is the color error and $\Delta L^*$, $\Delta u^*$ and $\Delta v^*$ are the differences in the $L^*$, $u^*$ and $v^*$ components, respectively, between the two color vectors under consideration. Once $\Delta E_{Luv}$ has been computed for each pixel of the image under consideration, the normalized color distance (NCD) is estimated according to the following formula:
$$NCD = \frac{\sum_{i=0}^{N_1}\sum_{j=0}^{N_2} \|\Delta E_{Luv}\|}{\sum_{i=0}^{N_1}\sum_{j=0}^{N_2} \|E^*_{Luv}\|} \qquad (3.128)$$
where $E^*_{Luv} = \left[(L^*)^2 + (u^*)^2 + (v^*)^2\right]^{\frac{1}{2}}$ is the norm or magnitude of the uncorrupted original image pixel vector in the $L^*u^*v^*$ space.

Although quantitative measures, such as $\Delta E_{Luv}$ and NCD, are close approximations to the perceptual error, they cannot exactly characterize the quite complex attributes of human perception. Therefore, an alternative subjective approach is commonly used by researchers [81] for estimating the perceptual error.
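The two objective measures can be computed as sketched below; delegating the RGB to L*u*v* conversion to scikit-image's rgb2luv is an assumption of convenience rather than the exact conversion routine of Chap. 1.

import numpy as np
from skimage.color import rgb2luv

def nmse(original, filtered):
    # Normalized mean square error of eq. (3.126), computed over RGB vectors.
    o = original.astype(float)
    f = filtered.astype(float)
    return np.sum((o - f) ** 2) / np.sum(o ** 2)

def ncd(original, filtered):
    # Normalized color difference of eq. (3.128): the per-pixel perceptual error
    # (3.127) is the Euclidean distance between L*u*v* vectors, normalized by
    # the magnitude of the original L*u*v* vectors.
    o_luv = rgb2luv(original)     # expects RGB images (uint8 or float in [0, 1])
    f_luv = rgb2luv(filtered)
    delta_e = np.linalg.norm(o_luv - f_luv, axis=-1)
    norm_e = np.linalg.norm(o_luv, axis=-1)
    return np.sum(delta_e) / np.sum(norm_e)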

The second approach, the easiest and simplest, is the subjective evaluation of the two images to be compared, in which both images are viewed simultaneously, under identical viewing conditions, by a set of observers. A set of color image quality attributes can be defined for the subjective evaluation [81]. The evaluation must take into consideration the factors that are important in image filtering. For the results presented, the performance is ranked subjectively in five categories: excellent (5), very good (4), good (3), fair (2) and bad (1), using the following subjective criteria (see Table 3.4).
1. Detail preservation: corresponds to edge and fine detail preservation. One of the most important criteria in the subjective examination of a filter's performance is edge preservation. Color edges in an image may be defined as a discontinuity or abrupt change in the color attributes. Edges are important features since they provide an excellent indication of the shape of the objects in the image. Maintaining the sharpness of the edges is as important as removing the noise in the image. The same holds true for fine details: an image devoid of details looks plain and unpleasant. Therefore, it is important to distinguish the fine elements from the noise, so that they can be preserved during the filtering process.
2. Color appearance: refers to color sharpness, the distinctness of boundaries among colors, and color uniformity, that is, the consistency of the color in uniform areas. The human eye is very sensitive to small changes in color. Therefore, it is important to keep the chromaticity (namely hue and saturation) constant while removing noise. The natural appearance of the color features in the scene must be preserved, while artificial contrast, color drift and other aberrations that make the filtered image look unpleasant should be avoided.
3. Defects: classifies any imperfection, such as blocking artifacts, that was not present in the original (noise-free) image.

Table 3.3. Subjective Image Evaluation Guidelines

Score   Overall Evaluation                            Noise Removal Evaluation
1       Very disruptive distortion                    poor
2       Disruptive distortion                         fair
3       Destructive but not disruptive distortion     good
4       Perceivable but not destructive distortion    very good
5       Imperceivable distortion                      excellent

Table 3.4. Figure of Merit

a   Overall Subjective Evaluation
b   Additive Gaussian noise
c   Impulsive noise
d   Moderate mixed (Gaussian/impulsive) noise
e   Mixed (Gaussian/impulsive) noise

In this study, the color images under consideration were viewed in parallel on a SUN Sparc 20 with a 24-bit color monitor, and the observers were asked to mark scores on a printed evaluation sheet following the guidelines summarized in Table 3.3 [82]. A similar procedure was followed to subjectively evaluate the noise removal capabilities of the algorithms: observers were instructed to assign a lower number if noise was still present in the filtered output (Table 3.3).

To this end, the performance of the different filters in noise attenuation is compared using the test RGB image `Peppers'. The image is corrupted by outliers (4% impulsive noise, Fig. 3.9). The RGB color image `Lenna' is also used; this test image is corrupted with Gaussian noise (σ = 15) mixed with 2% impulsive noise (Fig. 3.10). All the filters considered in this section operate using a square 3×3 processing window. Filtering results using the different estimators are depicted in Fig. 3.18 and Fig. 3.26. A visual comparison of the images clearly favors the adaptive designs over the existing techniques.

One of the obvious observations from the results in Tables 3.5-3.12 is the effect of window size on the performance of the filter. In the case of rank-type filters, such as the VMF, BVDF, CBRF and DDF, as well as the HF and the AHF, the bigger window size (5×5) gives considerably better results for the removal of Gaussian noise (noise model 1), while decreasing the performance for the removal of impulsive noise (noise model 2). Although a similar pattern holds for the adaptive filters, whether fuzzy, Bayesian or non-parametric, the effect of the window size on performance is less dramatic than for the rank-type filters. Analysis of the results summarized here also reveals the effect that the distance (or similarity) measure can have on the filter output. Even filters which are based on the same concept, such as VDF, CVDF and CBVF, or ANNF and CANNF, have different performance simply because a different distance measure is utilized to quantify dissimilarity among the color vectors. Similarly, the double window adaptive filters have better smoothing abilities, outperforming the other filters, when a Gaussian or mixed noise model is assumed.
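For orientation, a minimal vector median filter over a 3×3 window is sketched below, using the L2 aggregated distance. This is a generic formulation for reference only, not necessarily the exact VMF implementation benchmarked in the tables.

import numpy as np

def vector_median_filter(img, w=3):
    # Each pixel is replaced by the window sample whose aggregated L2 distance
    # to all other samples in the window is minimal.
    pad = w // 2
    padded = np.pad(img.astype(float), ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    out = np.empty_like(img, dtype=float)
    rows, cols = img.shape[:2]
    for i in range(rows):
        for j in range(cols):
            window = padded[i:i + w, j:j + w].reshape(-1, img.shape[2])
            dists = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=-1).sum(axis=1)
            out[i, j] = window[np.argmin(dists)]
    return out.astype(img.dtype)

# usage on a small random RGB array
test = np.random.default_rng(2).integers(0, 256, (32, 32, 3), dtype=np.uint8)
filtered = vector_median_filter(test)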

For the case of impulsive noise, the VMF gives the best performance among the rank-type filters, according to both the results and the theory, and it is thus used as a benchmark against which the fuzzy adaptive designs are evaluated. The proposed fuzzy filters perform close to the VMF and outperform existing adaptive designs, such as the HF or the AHF, with respect to NMSE and NCD, and for both window sizes. For the case of pure Gaussian noise, the VMF gives the worst results. The results summarized in Tables 3.5-3.12 indicate that the adaptive filters perform exceptionally well in this situation. The arithmetic mean filter (AMF) is theoretically the best non-adaptive filter for the removal of pure Gaussian noise (noise model 1); indeed, the NMSE, the NCD and the subjective measure all indicate the best performance by the AMF. The performance of the AMF is therefore used as a benchmark against which the new adaptive filters are compared in the same noise environment. The results indicate that the adaptive filters, both fuzzy and non-parametric, perform better than or close to the AMF and outperform existing adaptive filters, such as the AHF, in the NMSE, NCD and subjective sense. Clearly, the new AMNFG adaptive filter is the best for Gaussian noise and performs exceptionally well, outperforming the existing filters, both adaptive and non-adaptive, with respect to all three error measures and for both window sizes.

For the mixture of Gaussian and impulsive noise (noise models 3 and 4), the adaptive fuzzy filters consistently outperform all of the existing filters listed, both rank-type and adaptive, with respect to NMSE and NCD. This is demonstrated by the simple fact that, for noise models 3 and 4 (see Table 3.1), the highest error among the new adaptive filters is comparable to the lowest error among the existing rank-type, non-adaptive filters. Herein lies the real advantage of the adaptive designs, such as the fuzzy, Bayesian or non-parametric filters introduced here. In real applications, the noise model is unknown a priori. Nevertheless, the most common noise types encountered in real situations are Gaussian, impulsive or a mixture of both. Therefore, the use of the proposed fuzzy adaptive filters guarantees near optimal performance for the removal of any kind of noise encountered in practical applications. On the contrary, application of a `noise-mismatched' filter, such as the VMF for Gaussian noise, can have profound consequences leading to unacceptable results.

In conclusion, it can easily be seen from the results listed in the tables that the adaptive designs provide consistently good results for all types of noise, outperforming the other multichannel filters under consideration. The adaptive designs discussed here attenuate both impulsive and Gaussian noise. The versatile design of (3.1) allows for a number of different filters, which can provide solutions to many different types of filtering problems. Simple adaptive fuzzy designs, such as the ANNF or the CANNF, can preserve edges and smooth noise under different scenarios, outperforming other widely used multichannel filters. If knowledge about the noise characteristics is available,
the designer can tune the parameters of the adaptive filter to obtain better results. Finally, considering the number of computations, the computationally intensive part of the adaptive fuzzy system is the distance calculation. However, this step is common to all the multichannel algorithms considered here. In summary, the adaptive design is simple, does not increase the numerical complexity of the multichannel algorithm, and delivers excellent results for complicated multichannel signals, such as real color images.

Table 3.5. NMSE (x10^-2) for the RGB `Lenna' image, 3×3 window

Filter     Noise Model 1   Noise Model 2   Noise Model 3   Noise Model 4
None       4.2083          5.1694          3.6600          9.0724
AMF        0.6963          0.8186          0.6160          1.2980
BVDF       2.8962          0.3448          0.4630          1.1354
CBRF       1.3990          0.1863          0.5280          1.5168
GVDF       1.4600          0.3000          0.6334          1.9820
DDF        1.5240          0.3255          0.6483          1.6791
VMF        1.6000          0.1900          0.5404          1.6790
FVDF       0.7335          0.2481          0.4010          1.0390
ANNF       0.8510          0.2610          0.3837          1.0860
ANNMF      0.6591          0.1930          0.3264          0.7988
HF         1.3192          0.2182          0.5158          1.6912
AHF        1.0585          0.2017          0.4636          1.4355
CANNF      0.8360          0.2497          0.3471          1.0481
CANNMF     0.6001          0.1891          0.3087          0.7137
CBANNF     0.8398          0.2349          0.3935          1.0119
CBANNMF    0.6011          0.1894          0.3087          0.7149
AMNFE      0.5650          0.1710          0.3020          0.6990
AMNFG      0.8417          0.2006          0.3578          1.0070
AMNFD      0.8045          0.2350          0.3537          1.0101
BFMA       0.7286          0.3067          0.4284          1.0718

Table 3.6. NMSE (x10^-2) for the RGB `Lenna' image, 5×5 window

Filter     Noise Model 1   Noise Model 2   Noise Model 3   Noise Model 4
None       4.2083          5.1694          3.6600          9.0724
AMF        0.5994          0.6656          0.5702          0.8896
BVDF       2.800           0.7318          0.6850          1.3557
CBRF       0.9258          0.3180          0.4890          1.0061
GVDF       1.0800          0.5400          0.4590          1.1044
DDF        1.0242          0.5126          0.6913          1.3048
VMF        1.1700          0.5800          0.5172          1.0377
FVDF       0.7549          0.3087          0.4076          0.9550
ANNF       0.6260          0.4210          0.4360          0.7528
ANNMF      0.5445          0.2505          0.3426          0.6211
HF         0.7700          0.3841          0.4890          1.1417
AHF        0.6762          0.3772          0.4367          0.7528
CANNF      0.5950          0.4028          0.4091          0.7380
CANNMF     0.5208          0.3017          0.3671          0.5802
CBANNF     0.5925          0.3943          0.4045          0.7111
CBANNMF    0.5201          0.3014          0.3662          0.5795
AMNFE      0.5180          0.3010          0.3710          0.5830
AMNFG      0.5140          0.3070          0.3620          0.5810
AMNFD      0.4587          0.3492          0.4258          0.8211
BFMA       0.5809          0.3146          0.3799          0.6637

Table 3.7. NMSE (x10^-2) for the RGB `Peppers' image, 3×3 window

Filter     Noise Model 1   Noise Model 2   Noise Model 3   Noise Model 4
None       5.0264          6.5257          3.2890          6.5076
AMF        1.0611          4.8990          3.4195          4.8970
BVDF       3.9267          1.5070          0.8600          1.4911
CBRF       1.9622          0.4650          0.4354          0.4711
GVDF       1.8640          0.4550          0.3613          0.4562
DDF        3.5090          0.5886          0.5336          0.5893
VMF        1.8440          0.3763          0.3260          0.3786
FVDF       1.4550          0.4246          0.3412          0.4046
ANNF       1.1230          0.5110          0.3150          0.5180
ANNMF      0.9080          0.3550          0.3005          0.3347
HF         1.5892          0.4690          0.3592          0.4781
AHF        1.4278          0.4246          0.3566          0.4692
CANNF      1.1382          0.4696          0.3492          0.4699
CANNMF     0.8994          0.4526          0.4284          0.4545
CBANNF     1.2246          0.4546          0.4566          0.4548
CBANNMF    0.8964          0.4546          0.4300          0.4548
AMNFE      1.1489          0.4976          0.4779          0.4996
AMNFG      1.1130          0.4984          0.4786          0.5084
AMNFD      1.1495          0.4584          0.3700          0.4583
BFMA       1.4118          0.4887          0.4494          0.4876

Table 3.8. NMSE (x10^-2) for the RGB `Peppers' image, 5×5 window

Filter     Noise Model 1   Noise Model 2   Noise Model 3   Noise Model 4
None       5.0264          6.5257          3.2890          6.5076
AMF        0.9167          1.7341          2.1916          1.1706
BVDF       4.2698          2.7920          1.6499          4.1350
CBRF       1.4639          0.7090          0.6816          0.7161
GVDF       1.2534          0.6977          0.6600          0.7030
DDF        2.1440          0.7636          0.7397          0.7612
VMF        1.3390          0.6740          0.6563          0.6812
FVDF       2.1120          0.7310          0.6971          0.7178
ANNF       1.0027          0.5230          0.5200          0.6210
ANNMF      0.8050          0.4471          0.4047          0.4458
HF         1.0040          0.9970          0.7684          0.9970
AHF        1.1167          0.9841          0.7632          0.9841
CANNF      1.0281          0.7393          0.6718          0.7426
CANNMF     0.8687          0.6355          0.6405          0.6420
CBANNF     1.0145          0.7281          0.6677          0.7310
CBANNMF    0.8634          0.6338          0.6313          0.6371
AMNFE      1.0001          0.6665          0.6527          0.6686
AMNFG      0.09945         0.6671          0.6533          0.6693
AMNFD      0.9889          0.6540          0.6155          0.6555
BFMA       1.1972          0.48577         0.4524          0.4817

Table 3.9. NCD for the RGB `Lenna' image, 3×3 window

Filter     Noise Model 1   Noise Model 2   Noise Model 3   Noise Model 4
None       0.1149          0.0875          0.7338          0.1908
AMF        0.0334          0.0284          0.0295          0.0419
BVDF       0.0508          0.0082          0.0210          0.0708
CBRF       0.0467          0.0051          0.0169          0.0524
GVDF       0.0462          0.0079          0.0191          0.0489
DDF        0.0398          0.0073          0.0179          0.0426
VMF        0.0432          0.0053          0.0238          0.0419
FVDF       0.0377          0.0049          0.0144          0.0394
ANNF       0.0338          0.0061          0.0149          0.0412
ANNMF      0.0316          0.0047          0.01374         0.0402
HF         0.03824         0.0061          0.0147          0.0486
AHF        0.0347          0.0593          0.0139          0.0442
CANNF      0.0222          0.0057          0.0090          0.0255
CANNMF     0.0175          0.0046          0.0081          0.0193
CBANNF     0.0229          0.0055          0.0089          0.0250
CBANNMF    0.0175          0.0046          0.0081          0.01934
AMNFE      0.0311          0.0151          0.0213          0.0331
AMNFG      0.0301          0.0169          0.0213          0.0325
AMNFD      0.0218          0.0054          0.0091          0.0283
BFMA       0.0360          0.0201          0.0250          0.0404

Table 3.10. NCD for the RGB `Lenna' image, 5×5 window

Filter     Noise Model 1   Noise Model 2   Noise Model 3   Noise Model 4
None       0.1149          0.0875          0.7338          0.1908
AMF        0.0275          0.0270          0.0252          0.0338
BVDF       0.0408          0.0084          0.0267          0.0631
CBRF       0.0284          0.0070          0.0130          0.0310
GVDF       0.0220          0.0089          0.0189          0.0474
DDF        0.0279          0.0079          0.0171          0.0368
VMF        0.0193          0.0062          0.0236          0.0344
FVDF       0.0218          0.0057          0.0129          0.0339
ANNF       0.0202          0.0071          0.0120          0.0329
ANNMF      0.0181          0.0059          0.0123          0.0318
HF         0.0199          0.0097          0.0123          0.01205
AHF        0.0188          0.0941          0.0120          0.0322
CANNF      0.0129          0.0078          0.0085          0.0153
CANNMF     0.0126          0.0063          0.0080          0.0134
CBANNF     0.0130          0.0077          0.0084          0.0150
CBANNMF    0.0126          0.0063          0.0080          0.0134
AMNFE      0.0261          0.0173          0.0212          0.0281
AMNFG      0.0279          0.0177          0.0216          0.0294
AMNFD      0.0140          0.0070          0.0086          0.0168
BFMA       0.0309          0.0192          0.0228          0.0339

Table 3.11. NCD for the RGB `Peppers' image, 3×3 window

Filter     Noise Model 1   Noise Model 2   Noise Model 3   Noise Model 4
None       0.2414          0.0854          0.0831          0.0859
AMF        0.1042          0.1296          0.1144          0.1298
BVDF       0.1916          0.0774          0.0668          0.0775
CBRF       0.1579          0.0560          0.0541          0.0561
GVDF       0.1463          0.0631          0.0596          0.0639
DDF        0.2113          0.0678          0.0657          0.0679
VMF        0.1624          0.0559          0.0533          0.0558
FVDF       0.1217          0.0585          0.0558          0.0591
ANNF       0.1135          0.0642          0.0578          0.0643
ANNMF      0.0997          0.0575          0.0565          0.0579
HF         0.1406          0.0609          0.0553          0.0605
AHF        0.1346          0.0605          0.0557          0.0601
CANNF      0.1137          0.0610          0.0561          0.0610
CANNMF     0.1009          0.0571          0.0560          0.0574
CBANNF     0.1132          0.0605          0.0558          0.0606
CBANNMF    0.1007          0.0569          0.0559          0.0570
AMNFE      0.1003          0.0597          0.0585          0.0598
AMNFG      0.1007          0.0597          0.0584          0.0597
AMNFD      0.109           0.0621          0.0584          0.0623
BFMA       0.1311          0.0583          0.0566          0.0582

Table 3.12. NCD for the RGB `Peppers' image, 5×5 window

Filter     Noise Model 1   Noise Model 2   Noise Model 3   Noise Model 4
None       0.2414          0.0854          0.0831          0.0859
AMF        0.0916          0.1029          0.0944          0.1028
BVDF       0.186235        0.1056          0.0867          0.1047
CBRF       0.1281          0.0657          0.0646          0.0659
GVDF       0.1384          0.0941          0.0870          0.0946
DDF        0.1613          0.0706          0.0695          0.0706
VMF        0.1301          0.0662          0.0648          0.0663
FVDF       0.1310          0.0658          0.0644          0.0659
ANNF       0.0917          0.0760          0.0698          0.0760
ANNMF      0.0895          0.0657          0.0652          0.0658
HF         0.1118          0.0798          0.0697          0.0798
AHF        0.1070          0.0795          0.0699          0.0792
CANNF      0.0896          0.0652          0.0651          0.0659
CANNMF     0.0896          0.0652          0.0651          0.0659
CBANNF     0.0988          0.07246         0.06837         0.0725
CBANNMF    0.0893          0.0649          0.06452         0.0651
AMNFE      0.0915          0.0671          0.0660          0.0672
AMNFG      0.0917          0.0670          0.0659          0.0671
AMNFD      0.0917          0.0687          0.0672          0.0688
BFMA       0.1191          0.0579          0.0563          0.0577

Table 3.13. Subjective Evaluation

Filter     Figure of Merit
           a   b   c   d   e
AMF        2   5   1   3   4
BVDF       3   3   3   2   1
CBRF       3   3   3   3   3
GVDF       3   3   4   3   3
DDF        3   3   3   3   2
VMF        4   2   5   3   3
FVDF       3   3   4   3   3
ANNF       4   3   3   3   3
ANNMF      4   3   3   3   3
HF         3   3   3   3   3
AHF        3   3   4   3   3
CANNF      4   3   3   3   3
CANNMF     4   4   3   3   3
CBANNF     4   3   3   3   3
CBANNMF    4   4   3   3   3
AMNFE      4   4   5   5   4
AMNFG      4   4   5   5   5
AMNFD      4   4   5   4   5
BFMA       4   5   4   4   4

Fig. 3.9. `Peppers' corrupted by 4% impulsive noise

Fig. 3.10. `Lenna' corrupted with Gaussian noise (σ = 15) mixed with 2% impulsive noise

The effect of geometric adaptability in color image processing is also studied in this section through experimental analysis. The processing task at hand is color image filtering. The noise is distributed randomly in the RGB image planes, and the adaptive morphological filter processes each of the three color components separately. Since there is no generally accepted quantitative difference measure for image quality, the normalized mean square error (NMSE) is used to quantify the performance of the filters. As in the experiments discussed previously, four subjective criteria are introduced to quantify the performance of the different filters under consideration:
1. Edge preservation
2. Detail preservation
3. Color appearance
4. Smoothness of uniform areas

Fig. 3.11. VMF of (3.9) using 3×3 window
Fig. 3.12. BVDF of (3.9) using 3×3 window
Fig. 3.13. HF of (3.9) using 3×3 window
Fig. 3.14. AHF of (3.9) using 3×3 window
Fig. 3.15. FVDF of (3.9) using 3×3 window
Fig. 3.16. ANNMF of (3.9) using 3×3 window
Fig. 3.17. CANNMF of (3.9) using 3×3 window
Fig. 3.18. BFMA of (3.9) using 3×3 window

Fig. 3.19. VMF of (3.10) using 3×3 window
Fig. 3.20. BVDF of (3.10) using 3×3 window
Fig. 3.21. HF of (3.10) using 3×3 window
Fig. 3.22. AHF of (3.10) using 3×3 window
Fig. 3.23. FVDF of (3.10) using 3×3 window
Fig. 3.24. ANNMF of (3.10) using 3×3 window
Fig. 3.25. CANNMF of (3.10) using 3×3 window
Fig. 3.26. BFMA of (3.10) using 3×3 window

The performance is ranked subjectively in five categories: excellent (5), very good (4), good (3), fair (2) and bad (1). The RGB color image `Mandrill' is used for comparison purposes. The size and shape of the window and the operation sequence for each filter are chosen so that the best result can be obtained from the filter. In particular, the three filters compared here are:
1. A non-adaptive morphological close-opening filter with four 5-point one-dimensional structuring elements oriented at 0°, 45°, 90°, and 135°.
2. The NOP-NCP filter with a 9-point (N = 9) adaptive structuring element. The structuring element B used in the open-closing of the post-processing for obtaining the coarse image z1 is a (2×2) window. A 5-point (N1 = 5) adaptive structuring element is used in the NOP-NCP filtering of the isolated noise patterns in z2.
3. The vector median filter (VMF) with a (3×3) square window.
The performance of the filters on the noise-corrupted Mandrill image is illustrated in Table 3.14.

Table 3.14. Performance measures for the image Mandrill

Filter          NMSE (x10^-2)   Edge preserv.   Detail preserv.   Color appear.   CPU time
Close-opening   1.7234          2               2                 2               6.26
NOP-NCP         1.2823          5               5                 4               14.39
VMF             2.31            5               4                 4               8.70

The adaptive morphological filter is the most effective in providing a good quality image in terms of detail, edge and color preservation. The close-opening morphological filter is quite effective in removing the noise, but it tends to blur the image, cause unnatural colors, and smooth the texture. Its performance also depends on the patterns in the image: it provides good filtering only if the object patterns are aligned in the same directions as the four structuring elements, which limits its effectiveness in general cases. The experiment demonstrates the way in which the morphological filter deals with color distortion. Because of its adaptive structuring element, the morphological filter can best preserve the detailed structures in each color component, and thus achieves a better final result. By comparison, the VMF utilizes the color correlation, and thus shows strong performance along the edges of large blocks. However, the fixed window of the filter cannot fit into many detailed structures of the image; thus, the vector median filter may alter those structures and create artificial patterns. The sudden change of color vectors at those patterns appears as artificial color changes. According to the experimental results reported here, it can be concluded that to best preserve the color quality, not only does the correlation between the three color channels have to be utilized, but the images also have to be processed according to the image structure.

Fig. 3.27. `Mandrill' - 10% impulsive noise
Fig. 3.28. NOP-NCP filtering results
Fig. 3.29. VMF using 3×3 window
Fig. 3.30. Multistage close-opening filtering results

The computation time taken by the close-opening filter and the VMF is relatively short compared to that of the NOP-NCP filter. The adaptive morphological filter provides the best detail and highlight preservation at the expense of more CPU time. Although research is needed towards more efficient algorithms that can provide a faster search for the structuring elements, the geometric approach to color image processing can prove valuable in applications where no prior knowledge of the image statistics is available.

3.7 Conclusions

In this chapter, adaptive filters suitable for color image processing have been discussed. The behavior of these adaptive designs was analyzed and their performance was compared to that of the most commonly used nonlinear filters. Particular emphasis was given to the formulation of the problem and
the filter design procedure. To fully assess the applicability of the adaptive techniques, further analysis is required on algorithms and architectures which may be used for the realization of the adaptive designs. Issues such as speed, modularity, the effect of finite precision arithmetic, cost and software portability should be addressed. The adaptive designs not only have a solid theoretical foundation but also deliver promising performance under a variety of noise characteristics. Indeed, the simulation results included here and the subjective evaluation of the filtered color images indicate that adaptive filters compare favorably with the other techniques in use to date. The rich and expanding area of color signal processing underlines the importance of the tools presented here. In addition to color image processing, application areas such as multi-modal signal processing, telecommunication applications such as channel equalization and digital audio restoration, satellite imagery, multichannel signal processing for seismic deconvolution, and applications in biomedicine, such as multi-electrode ECG/EEG and CT scans, to name a few, are potential application fields of the adaptive methodologies discussed in this chapter. Problems motivated by these new applications demand investigations into algorithms and methodologies which may result in even more effective adaptive filtering structures.

References

1. Pitas, I., Venetsanopoulos, A.N. (1990): Nonlinear Digital Filters: Principles and Applications. Kluwer Academic Publishers, Boston, MA.
2. Lee, J.S. (1980): Digital image enhancement and noise filtering by local statistics. IEEE Trans. on Pattern Recognition and Machine Intelligence, 2, 165-168.
3. Sun, X.Z., Venetsanopoulos, A.N. (1988): Adaptive schemes for noise filtering and edge detection by use of local statistics. IEEE Trans. on Circuits and Systems, 35(1), 59-69.
4. Cotropoulos, C., Pitas, I. (1994): Adaptive nonlinear filter for digital signal/image processing. (Advances in 2D and 3D Digital Processing, Techniques and Applications, edited by C.T. Leondes), Academic Press, 67, 263-317.
5. Kosko, B. (1991): Neural Networks for Signal Processing. Prentice Hall, Englewood Cliffs, N.J., USA.
6. Yin, L., Astola, J., Neuvo, Y. (1993): A new class of nonlinear filters: Neural filters. IEEE Trans. on Signal Processing, 41, 1201-1222.
7. Russo, F. (1996): Nonlinear fuzzy filters: An overview. Proceedings of the European Signal Processing Conference, VIII, 1709-1712.
8. Choi, Y., Krishnapuram, R.: A robust approach to image enhancement based on fuzzy logic. IEEE Trans. on Image Processing, 6(6), 808-825.
9. Yu, P.T., Chung Chen, R. (1996): Fuzzy stack filters: Their definitions, fundamental properties and application in image processing. IEEE Trans. on Image Processing, 5(6), 838-854.
10. Russo, F., Ramponi, G. (1996): A fuzzy filter for images corrupted by impulsive noise. IEEE Signal Processing Letters, 3(6), 168-170.

11. Plataniotis, K.N., Androutsos, D., Vinayagamoorthy, S., Venetsanopoulos, A.N. (1997): Color image processing using adaptive multichannel filters. IEEE Trans. on Image Processing, 6(7), 933-950.
12. Russo, F., Ramponi, G. (1994): Nonlinear fuzzy operators for image processing. Signal Processing, 38(4), 429-440.
13. Yang, X., Toh, P.S. (1995): Adaptive fuzzy multilevel median filter. IEEE Trans. on Image Processing, 4(5), 680-682.
14. Taguchi, A., Kimura, T. (1996): Data-dependent filtering based on if-then rules and else rules. Proceedings of the European Signal Processing Conference, VIII, 1713-1716.
15. Arakawa, K., Arakawa, Y. (1991): Digital signal processing using fuzzy clustering. IEICE Transactions, E74(11), 3554-3558.
16. Arakawa, K., Arakawa, Y. (1993): Proposal of median-type fuzzy filter and its optimum design. Electronics and Communications in Japan: Part 3, 76(7), 27-35.
17. Taguchi, A., Izawa, N. (1996): Fuzzy center weighted median filters. Proceedings of the European Signal Processing Conference, VIII, 1721-1724.
18. Russo, F. (1997): Nonlinear filtering of noisy images using neuro-fuzzy operators. Proceedings of the IEEE Conference on Image Processing, 1997.
19. Tsai, H-H., Yu, Pao-Ta (1999): Adaptive fuzzy hybrid multichannel filters for color image restoration. Proceedings of the 1999 IEEE Workshop on Nonlinear Signal and Image Processing, I, 134-138.
20. Tsai, H-H., Yu, Pao-Ta (1999): Adaptive fuzzy hybrid multichannel filters for removal of impulsive noise from color images. Signal Processing, 74(2), 127-152.
21. Kosko, B. (1992): Neural Networks and Fuzzy Systems: A Dynamic Systems Approach to Machine Intelligence. Prentice Hall, Englewood Cliffs, N.J., USA.
22. Plataniotis, K.N., Androutsos, D., Venetsanopoulos, A.N. (1996): Fuzzy adaptive filters for multichannel image processing. Signal Processing Journal, 55(1), 93-106.
23. Plataniotis, K.N., Androutsos, D., Venetsanopoulos, A.N. (1996): Multichannel filters for image processing. Signal Processing: Image Communications, 9(2), 143-158.
24. Mendel, J.M. (1995): Fuzzy logic systems for engineering: A tutorial. Proceedings of the IEEE, 26(3), 345-377.
25. Bilgic, T., Turksen, I.B. (1996): Measurement of membership functions: Theoretical and empirical work. Technical report, Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Canada.
26. Zysno, P. (1981): Modelling membership functions. (Empirical Semantics, B. Rieger Editor), Brockmeyer, Bochum, Germany, 350-375.
27. Zimmerman, H.J., Zysno, P. (1996): Quantifying vagueness in decision models. European Journal of Operation Research, 22, 148-154.
28. Zimmermann, H.J., Zysno, P. (1980): Latent connectives in human decision making. Fuzzy Sets and Systems, 4, 37-51.
29. Roberts, F.S. (1979): Measurement Theory with Applications to Decision-Making, Utility and the Social Sciences. Addison-Wesley, Reading, Massachusetts.
30. Zimmermann, H.J. (1987): Fuzzy Sets, Decision Making and Expert Systems. Kluwer Academic, Boston, Massachusetts.
31. Zadeh, L.A. (1965): Fuzzy sets. Information Control, 8, 338-353.
32. Shepard, R.N. (1981): Towards a universal law of generalization for psychological science. Science, 237, 1317-1323.
33. Dombi, J. (1990): Membership function as an evaluation. Fuzzy Sets and Systems, 35, 1-21.

34. Plataniotis, K.N., Androutsos, D., Sri, V., Venetsanopoulos, A.N. (1995): A Nearest Neighbour Multichannel Filter. Electronic Letters, 31, 1910-1911.
35. Plataniotis, K.N., Androutsos, D., Vinayagamoorthy, S., Venetsanopoulos, A.N. (1996): An adaptive nearest neighbor multichannel filter. IEEE Trans. on Circuits and Systems for Video Technology, 6(6), 699-703.
36. Plataniotis, K.N., Androutsos, D., Venetsanopoulos, A.N. (1997): Content-based colour image filters. Electronic Letters, 33(3), 202-203.
37. Plataniotis, K.N., Androutsos, D., Venetsanopoulos, A.N. (1997): Color image filters: The vector directional approach. Optical Engineering, 36(9), 2375-2383.
38. Plataniotis, K.N., Androutsos, D., Venetsanopoulos, A.N. (1996): Color image processing using adaptive vector directional filters. IEEE Trans. on Circuits and Systems II: Analog and Digital Signal Processing, 45(10), 1414-1419.
39. Plataniotis, K.N., Androutsos, D., Venetsanopoulos, A.N. (1996): An adaptive multichannel filter for color image processing. Canadian Journal of Electrical & Computer Engineering, 21(4), 149-152.
40. Grabisch, M., Nguyen, H.T., Walker, E.A. (1996): Fundamentals of Uncertainty Calculi with Applications to Fuzzy Inference. Kluwer Academic Publishers, Dordrecht.
41. Fodor, J., Marichal, J., Raibens, M. (1995): Characterization of the ordered weighted averaging operators. IEEE Trans. on Fuzzy Systems, 3(2), 231-240.
42. Trahanias, P.E., Venetsanopoulos, A.N. (1993): Vector directional filters: A new class of multichannel image processing filters. IEEE Trans. on Image Processing, 2, 528-534.
43. Trahanias, P.E., Karakos, D., Venetsanopoulos, A.N. (1996): Directional processing of color images: theory and experimental results. IEEE Trans. on Image Processing, 5(6), 868-880.
44. Plataniotis, K.N., Androutsos, D., Vinayagamoorthy, S., Venetsanopoulos, A.N. (1997): Color image processing using adaptive multichannel filters. IEEE Trans. on Image Processing, 6(7), 933-950.
45. Bickel, P.J. (1982): On adaptive estimation. Annals of Statistics, 10, 647-671.
46. Sage, A.P., Melsa, J.L. (1979): Estimation Theory with Applications to Communication and Control. R.E. Krieger Publishing Co., Huntington, N.Y.
47. Box, G.E., Tiao, G.C. (1964): A note on criterion robustness and inference robustness. Biometrika, 51(2), 169-173.
48. Box, G.E., Tiao, G.C. (1973): Bayesian Inference in Statistical Analysis. Addison-Wesley Publishing Co., Toronto, Canada.
49. Pan, W., Jeffs, B.D. (1995): Adaptive image restoration using a generalized Gaussian model for unknown noise. IEEE Trans. on Image Processing, 4(10), 1451-1456.
50. Plataniotis, K.N. (1994): Distributed Parallel Processing State Estimation Algorithms. Ph.D. Dissertation, Florida Institute of Technology, Melbourne, Florida, USA.
51. Kim, H.M., Mendel, J.M.: Fuzzy basis functions: Comparisons with other basis functions. IEEE Trans. on Fuzzy Systems, 3(2), 158-169.
52. Plataniotis, K.N., Androutsos, D., Venetsanopoulos, A.N. (1998): Adaptive multichannel filters for color image processing. Signal Processing: Image Communications, 11(3), 1998.
53. Cacoullos, T. (1966): Estimation of a multivariate density. Annals of Statistical Mathematics, 18(2), 179-189.
54. Epanechnikov, V.K. (1969): Non-parametric estimation of a multivariate probability density. Theory Prob. Appl., 14, 153-158.
55. Fukunaga, K. (1990): Introduction to Statistical Pattern Recognition. Academic Press, Second Edition, London, UK.

56. Breiman, L., Meisel, W., Purcell, E. (1977): Variable kernel estimates of multivariate densities. Technometrics, 19(2), 135-144.
57. Rao, Prasaka B.L.S. (1983): Non-parametric functional estimation. Academic Press, N.Y.
58. Nadaraya, E.A. (1964): On estimating regression. Theory Probab. Applic., 15, 134-137.
59. Watson, G.S. (1964): Smooth regression analysis. Sankhya Ser. A, 26, 359-372.
60. Wagner, T.J. (1975): Nonparametric estimates of probability density. IEEE Trans. on Information Theory, 21(4), 438-440.
61. Pratt, W.K. (1991): Digital Image Processing. Second Edition, John Wiley, N.Y.
62. Fisher, N.I., Lewis, T., Embleton, B.J.J. (1993): Statistical Analysis of Spherical Data. Cambridge University Press, Paperback Edition, Cambridge.
63. Plataniotis, K.N., Androutsos, D., Venetsanopoulos, A.N. (1998): Processing color images using vector directional filters: extensions and new results. Proceedings, Nonlinear Image Processing IX, 3304, 268-276.
64. Srinivasan, A. (1996): Computational issues in the solution of liquid crystalline polymer flow problems. Ph.D. Dissertation, Department of Computer Science, University of California, Santa Barbara, CA.
65. Matheron, G.M. (1975): Random Sets and Integral Geometry. Wiley, New York, N.Y.
66. Serra, J. (1982): Image Analysis and Mathematical Morphology. Academic Press, London, U.K.
67. Sternberg, S.R. (1986): Greyscale morphology. Computer Vision, Graphics and Image Processing, 35, 333-355.
68. Serra, J. (1986): Introduction to mathematical morphology. Computer Vision, Graphics and Image Processing, 35, 283-305.
69. Smith, D.G. (1992): Fast Adaptive Video Processing: A Geometrical Approach. M.A.Sc. thesis, University of Toronto, Toronto, Canada.
70. Maragos, P.A. (1990): Morphological systems for multidimensional signal processing. Proceedings of the IEEE, 78(4), 690-709.
71. Serra, J. (1988): Image Analysis and Mathematical Morphology: Theoretical Advances. Academic Press, London, U.K.
72. Cheng, F., Venetsanopoulos, A.N. (1992): An adaptive morphological filter for image processing. IEEE Trans. on Image Processing, 1(4), 533-539.
73. Cheng, F., Venetsanopoulos, A.N. (1999): Adaptive morphological operators, fast algorithms and their applications. Pattern Recognition, forthcoming special issue on Mathematical Morphology and its applications.
74. Deng-Wong, P., Cheng, F., Venetsanopoulos, A.N. (1996): Adaptive morphological filters for color image enhancement. Journal of Intelligent and Robotic Systems, 15, 181-207.
75. Maragos, P. (1996): Differential morphology and image processing. IEEE Trans. on Image Processing, 5(6), 922-937.
76. Astola, J., Haavisto, P., Neuvo, Y. (1990): Vector median filters. Proceedings of the IEEE, 78, 678-689.
77. Trahanias, P.E., Pitas, I., Venetsanopoulos, A.N. (1994): Color Image Processing. (Advances in 2D and 3D Digital Processing: Techniques and Applications, edited by C.T. Leondes), Academic Press, 67, 45-90.
78. Karakos, D., Trahanias, P.E. (1997): Generalized multichannel image filtering structures. IEEE Trans. on Image Processing, 6(7), 1038-1045.
79. Gabbouj, M., Cheickh, F.A. (1996): Vector median-vector directional hybrid filter for color image restoration. Proceedings of the European Signal Processing Conference, VIII, 879-881.

80. Poynton, C.A. (1996): A Technical Introduction to Digital Video. (http://www.inforamp.net/poynton/Poynton-T-I-Digital-Video.html), Prentice Hall, Toronto.
81. Engeldrum, P.G. (1995): A framework for image quality models. Journal of Imaging Science and Technology, 39(4), 312-318.
82. Narita, N. (1994): Consideration of subjective evaluation method for quality of image coding. Electronics and Communications in Japan: Part 3, 77(7), 84-97.

4. Color Edge Detection

4.1 Introduction

One of the fundamental tasks in image processing is edge detection. High level image processing, such as object recognition, segmentation, image coding, and robot vision, depends on the accuracy of edge detection. Edges contain essential information about an image. Most edge detection techniques are based on finding maxima in the first derivative of the image function or zero-crossings in the second derivative of the image function. This concept is illustrated for a gray-level image in Fig. 4.1 [4]. The figure shows that the first derivative of the gray-level profile is positive at the leading edge of a transition, negative at the trailing edge, and zero in homogeneous areas. The second derivative is positive for the part of the transition associated with the dark side of the edge, negative for the part associated with the light side of the edge, and zero in homogeneous areas.

In a monochrome image an edge usually corresponds to object boundaries or changes in physical properties such as illumination or reflectance. This definition is more elaborate in the case of color (multispectral) images, since more detailed edge information is expected from color edge detection. According to psychological research on the human visual system [1], [2], color plays a significant role in the perception of boundaries. Monochrome edge detection may not be sufficient for certain applications, since no edges will be detected in gray-level images when neighboring objects have different hues but equal intensities [3]. Objects with such boundaries are treated as one big object in the scene. Since the capability of distinguishing between different objects is crucial for applications such as object recognition and image segmentation, the additional boundary information provided by color is of paramount importance. Color edge detection also outperforms monochrome edge detection in low contrast images [3]. There is thus a strong motivation to develop efficient color edge detectors that provide high quality edge maps. Despite the relatively short period over which they have been studied, numerous approaches to color edge detection, of differing complexity, have been proposed. It is important to identify their strengths and weaknesses in choosing the best edge detector for an application. In this chapter particular emphasis will be given to color edge detectors based on vector order statistics.
Fig. 4.1. Edge detection by derivative operators (image, profile of a horizontal line, first derivative, second derivative)

If the color image is considered as a three-dimensional vector space, a color edge can be defined as a significant discontinuity in the vector field representing the color image function. An abrupt change in the vector field characterizes a color step edge, whereas a gradual change characterizes a color ramp edge. It should be noted that the above definitions are not intended as formal definitions that can lead to edge detectors; rather, they are intuitive descriptions of the notion of color edges that facilitate the discussion of order statistics based edge detectors. Edge detectors based on order statistics operate by detecting local minima and maxima in the color image function and combining them in an appropriate way in order to produce a positive response for an edge pixel. Since there is no unique way to define order for multivariable signals, such as color images, the reduced ordering (R-ordering) scheme discussed in Chap. 2 will be used to sort the vector samples. A class of color edge detectors will then be defined using linear combinations of the sorted vector samples. The minimum over the magnitudes of these linear combinations defines this class of edge operators. Different coefficients in the linear combinations result in different edge detectors that vary in simplicity and in efficiency.

The major performance issues concerning edge detectors are their ability to extract edges accurately, their robustness to noise, and their computational efficiency. In order to provide a fair assessment, it is necessary to have a set of effective performance evaluation methods. Though numerous evaluation methods for edge detection have been proposed, there has not been any standardized method. In image processing, the evaluation methods can usually be categorized into objective and subjective evaluation. While
objective evaluation can provide analytical data for comparison purposes, it is not sufficient to represent the complexity of the human visual system. In most image processing applications, human evaluation is the final step, as noted by Clinque [7]. Subjective evaluation, which takes human perception into account, is therefore very attractive in this perspective. The visual assessment method proposed by Heath [8], [9] is entirely based on subjective evaluation. In this chapter, both types of evaluation methods are utilized for comparing the various edge detectors.

The chapter is organized as follows. An overview of the methodology for color edge detection is presented first. Early approaches extended from monochrome edge detection, as well as the more recent vector space approaches, are addressed. The edge detectors illustrated in this section include, among others, the Sobel operator [3], [10], the Laplacian operator [3], the Mexican Hat operator [3], [11], the Vector Gradient operator [12], the Directional operator [13], the Entropy operator [14], and the Cumani operator [15]. In the sequel, two families of vector-based edge detection operators, the Vector Order Statistic operators [16], [17] and the Difference Vector operators [25], are studied in detail. A variety of edge detectors obtained as special cases of the two families are introduced and their performance is evaluated. Evaluation results from both objective and subjective tests, as well as the conclusions drawn from the tests performed, are also given here.

4.2 Overview Of Color Edge Detection Methodology

4.2.1 Techniques Extended From Monochrome Edge Detection

In a monochrome image, an edge is defined as an intensity discontinuity. In the case of color images, the additional variation in color also needs to be considered. Early approaches to color edge detection comprise extensions from monochrome edge detection. These techniques are applied to the three color channels independently and the results are then combined using a certain logical operation [3].

Sobel operator. The first derivative at any point in an image is obtained using the magnitude of the gradient at that point. This can be done using various operators, including the Sobel, Prewitt, and Roberts operators [4], [5]. The Sobel operators have the advantage of providing both a differencing and a smoothing effect. Because derivatives enhance noise, the smoothing effect is a particularly attractive feature of the Sobel operators. A number of edge detectors, including the Sobel operator, are compared in [3]. The Sobel operator is implemented by convolving a pixel and its eight neighbors with the following two 3×3 convolution masks [4], [5]:
$$M_x = \begin{pmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{pmatrix}, \qquad M_y = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{pmatrix} \qquad (4.1)$$

The two masks are applied to each color channel independently, and the sum of the squared convolution results gives an approximation of the gradient magnitude in each channel. A pixel is regarded as an edge point if the mean of the gradient magnitude values in the three color channels exceeds a given threshold. According to [3], the Sobel operator produces very thick edges that have to be thinned.

Laplacian operator. The second derivative at any point in an image is obtained by using the Laplacian operator. The basic requirement in defining the Laplacian operator is that the coefficient associated with the center pixel be positive and the coefficients associated with the outer pixels be negative [4]. The sum of the coefficients has to be zero. An eight-neighbor Laplacian operator can be defined using the following convolution mask:

M = \begin{pmatrix} -10 & -22 & -10 \\ -22 & 128 & -22 \\ -10 & -22 & -10 \end{pmatrix}    (4.2)

The Laplacian mask is applied to the three color channels independently and the edge points are located by thresholding the maximum gradient magnitude. The methodology is simple, easy to implement, and quite successful in locating edges. However, there are problems when the Laplacian methodology is applied to color images. First, many of the Laplacian zero crossings are spurious edges which really correspond to local minima in gradient magnitude. It is well known that a zero crossing of the second-order derivative indicates an extremum of the first-order derivative, but not necessarily a local maximum. To improve the performance of the Laplacian operator and differentiate between global and local minima, the sign of the third derivative may have to be examined. Performance, however, can be hampered by the noise usually corrupting real images. Since differentiation amplifies noise, acting like a high pass filter, the Laplacian zero crossings of an image may include numerous false edges caused by noise. It is therefore recommended that a smoothing operator be applied to the image prior to the detection module.

Mexican Hat operator. Another group of edge detectors commonly used in monochrome edge detection is based on second derivative operators, and they can also be extended to color edge detection in the same way. The Mexican Hat operator uses convolution masks generated from the negative Laplacian of the Gaussian distribution:

-\nabla^2 G(x, y) = \frac{x^2 + y^2 - 2\sigma^2}{2\pi\sigma^6} \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right)    (4.3)

Edge points are located where a zero-crossing occurs in any color channel. The gradient operators proposed for gray scale images [26] can also be extended to color images by taking the vector sum of the gradients of the individual components [12], [14]. Similar to the Sobel and Laplacian operators, the gradient operator also employs first derivative-like mask patterns.
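To make the channel-wise extension concrete, the sketch below applies the Sobel masks of (4.1) to each RGB channel and thresholds the mean gradient magnitude, as described above. It is an illustrative sketch rather than the authors' implementation; the function name and the threshold value are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

MX = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
MY = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)

def sobel_color_edges(rgb, threshold=100.0):
    """Channel-wise Sobel edge detection for an (H, W, 3) float RGB image.

    Each channel is convolved with the two Sobel masks, the gradient
    magnitude is formed per channel, and a pixel is marked as an edge
    when the mean of the three channel magnitudes exceeds the threshold.
    """
    magnitudes = []
    for c in range(3):
        gx = convolve(rgb[..., c], MX, mode="nearest")
        gy = convolve(rgb[..., c], MY, mode="nearest")
        magnitudes.append(np.hypot(gx, gy))
    mean_mag = np.mean(magnitudes, axis=0)
    return mean_mag > threshold
```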

Other approaches consider performing operations in alternative color spaces. The Hueckel edge operator [27] operates in a luminance-chrominance color space. The edges in the three color components are assumed to be independent, under the constraint that they must have the same orientation. In studying the application of the compass gradient edge detection method to color images, Robinson [28] also utilized different color coordinates. One common problem with the above approaches is that they fail to take into account the correlation among the color channels and, as a result, they are not able to extract certain crucial information conveyed by color. For example, they tend to miss edges that have the same strength but opposite directions in two of their color components. Consequently, the approach of treating the color image as a vector field has been proposed.

4.2.2 Vector Space Approaches

Various approaches consider the problem of color edge detection in vector space. Color images can be viewed as a two-dimensional, three-channel vector field [29], which can be characterized by a discrete, integer-valued function f(x, y). The value of this function at each point is defined by a three-dimensional vector in a given color space. In the RGB color space the function can be written as f(x, y) = (R(x, y), G(x, y), B(x, y)), where (x, y) refers to the spatial coordinates in the 2-D plane. Most existing edge detection algorithms use either first or second differences between neighboring pixels for edge detection. A significant change gives rise to a peak in the first derivative and a zero-crossing in the second derivative, both of which can be identified fairly easily. Some of these operators are considered here.

Vector gradient operators. The vector gradient operator employs the concept of the gradient operator, with the modification that instead of a scalar space the operator works on a two-dimensional, three-channel color vector space. There are several ways of implementing the vector gradient operator [12]. One simple approach is to employ a (3x3) window centered on each pixel and obtain eight distance values (D_1, D_2, \ldots, D_8) by computing the Euclidean distance between the center vector and each of its eight neighboring vectors. The vector gradient g is then chosen as:

g = \max(D_1, D_2, \ldots, D_8)    (4.4)
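A minimal sketch of this simple vector gradient, computing (4.4) at every interior pixel, is given below. The function name and the border handling are illustrative assumptions, not part of the original text.

```python
import numpy as np

def vector_gradient(rgb):
    """Vector gradient g = max distance between a pixel and its 8 neighbours.

    rgb is an (H, W, 3) float array; the one-pixel border is left at zero.
    """
    h, w, _ = rgb.shape
    g = np.zeros((h, w))
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    centre = rgb[1:-1, 1:-1, :]
    for dy, dx in offsets:
        neighbour = rgb[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx, :]
        dist = np.linalg.norm(neighbour - centre, axis=2)
        g[1:-1, 1:-1] = np.maximum(g[1:-1, 1:-1], dist)
    return g
```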

Another approach employs directional operators. Let the image be a vector function f(x, y) = (R(x, y), G(x, y), B(x, y)), and let r, g, b be the unit vectors along the R, G, B axes, respectively. The horizontal and vertical directional operators can be defined as:

u = \frac{\partial R}{\partial x} r + \frac{\partial G}{\partial x} g + \frac{\partial B}{\partial x} b    (4.5)

v = \frac{\partial R}{\partial y} r + \frac{\partial G}{\partial y} g + \frac{\partial B}{\partial y} b    (4.6)

g_{xx} = u \cdot u = \left|\frac{\partial R}{\partial x}\right|^2 + \left|\frac{\partial G}{\partial x}\right|^2 + \left|\frac{\partial B}{\partial x}\right|^2    (4.7)

g_{yy} = v \cdot v = \left|\frac{\partial R}{\partial y}\right|^2 + \left|\frac{\partial G}{\partial y}\right|^2 + \left|\frac{\partial B}{\partial y}\right|^2    (4.8)

g_{xy} = u \cdot v = \frac{\partial R}{\partial x}\frac{\partial R}{\partial y} + \frac{\partial G}{\partial x}\frac{\partial G}{\partial y} + \frac{\partial B}{\partial x}\frac{\partial B}{\partial y}    (4.9)

Then the maximum rate of change of f and the direction of maximum contrast can be calculated as:

\theta = \frac{1}{2} \arctan\left( \frac{2 g_{xy}}{g_{xx} - g_{yy}} \right)    (4.10)

F(\theta) = \left\{ \frac{1}{2}\left[ (g_{xx} + g_{yy}) + \cos 2\theta \,(g_{xx} - g_{yy}) + 2 g_{xy} \sin 2\theta \right] \right\}^{1/2}    (4.11)

The edges can be obtained by thresholding F(\theta). The image derivatives along the x and y directions can be computed by convolving the vector function f with two spatial masks as follows:

\frac{\partial f_i}{\partial x} \simeq \frac{1}{6} \begin{pmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{pmatrix} * f_i, \qquad \frac{\partial f_i}{\partial y} \simeq \frac{1}{6} \begin{pmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ -1 & -1 & -1 \end{pmatrix} * f_i    (4.12)

Unlike the gradient operator extended from monochrome edge detection, the vector gradient operator can extract more color information from the image because it considers the vector nature of the color image. On the other hand, the vector gradient operator is very sensitive to small texture variations [17]. This may be undesirable in some cases, since it can cause confusion in identifying the real objects. The operator is also sensitive to Gaussian and impulse noise.
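A compact sketch of this vector gradient, following (4.7)-(4.11) and using the derivative masks of (4.12), is given below. It is a hedged illustration; the function name is an assumption and arctan2 is used in place of arctan to avoid the singular case g_{xx} = g_{yy}.

```python
import numpy as np
from scipy.ndimage import convolve

DX = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]) / 6.0
DY = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]]) / 6.0

def di_zenzo_contrast(rgb):
    """Maximum rate of change F(theta) of an (H, W, 3) color image, eq. (4.11)."""
    gxx = gyy = gxy = 0.0
    for c in range(3):
        fx = convolve(rgb[..., c], DX, mode="nearest")
        fy = convolve(rgb[..., c], DY, mode="nearest")
        gxx = gxx + fx * fx
        gyy = gyy + fy * fy
        gxy = gxy + fx * fy
    theta = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)              # eq. (4.10)
    f_theta = np.sqrt(np.maximum(0.0, 0.5 * ((gxx + gyy)
                      + np.cos(2 * theta) * (gxx - gyy)
                      + 2.0 * gxy * np.sin(2 * theta))))        # eq. (4.11)
    return f_theta, theta
```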

Directional operators. The direction of an edge in color images can be utilized in a variety of image analysis tasks [18]. A class of directional vector operators was proposed to detect the location and orientation of edges in color images [13]. In this approach, a color c(r, g, b) is represented by a vector c in color space. Similarly to the well known Prewitt operator [20] shown below,

H = \frac{1}{3} \begin{pmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{pmatrix}, \qquad V = \frac{1}{3} \begin{pmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix}    (4.13)

the row and column directional operators (i.e. in the horizontal and vertical directions) each have one positive and one negative component. For operators of size (2w+1) x (2w+1) the configuration is the following:

H = \begin{pmatrix} H^- & 0 & H^+ \end{pmatrix}, \qquad V = \begin{pmatrix} V^- \\ 0 \\ V^+ \end{pmatrix}    (4.14)

where the parameter w is a positive integer. These positive and negative components are convolution kernels, denoted by V^-, V^+, H^- and H^+, whose outputs are vectors corresponding to the local average colors. In order to estimate the color gradient at the pixel (x_o, y_o), the outputs of these components are calculated as follows:

H^+(x_o, y_o) = \frac{1}{w(2w+1)} \sum_{y=y_o-w}^{y_o+w} \; \sum_{x=x_o+1}^{x_o+w} c(x, y)

H^-(x_o, y_o) = \frac{1}{w(2w+1)} \sum_{y=y_o-w}^{y_o+w} \; \sum_{x=x_o-w}^{x_o-1} c(x, y)

V^+(x_o, y_o) = \frac{1}{w(2w+1)} \sum_{y=y_o+1}^{y_o+w} \; \sum_{x=x_o-w}^{x_o+w} c(x, y)

V^-(x_o, y_o) = \frac{1}{w(2w+1)} \sum_{y=y_o-w}^{y_o-1} \; \sum_{x=x_o-w}^{x_o+w} c(x, y)    (4.15)

where c(x, y) denotes the RGB color vector (r, g, b) at the image location (x, y). Local colors and local statistics affect the outputs of the operator components V^+(x, y), V^-(x, y), H^+(x, y) and H^-(x, y). In order to estimate the local variation in the vertical and horizontal directions, the following vector differences are calculated:

H(x_o, y_o) = H^+(x_o, y_o) - H^-(x_o, y_o), \qquad V(x_o, y_o) = V^+(x_o, y_o) - V^-(x_o, y_o)    (4.16)

The scalars ||H(x_o, y_o)|| and ||V(x_o, y_o)|| give the variation rate at (x_o, y_o) in orthogonal directions (i.e. they are the amounts of color contrast in the horizontal and vertical directions). The local changes in the color channels (i.e. R, G and B) cannot be combined properly by simply adding the R, G and B components of H and V; this approach leads to a mutual cancellation effect in several situations (e.g. when the contrast is in phase opposition in different channels). Instead, the local changes in R, G and B are assumed to be independent (i.e. orthogonal), and the intensity of the local color contrast is obtained as the magnitude of the resultant vector in the RGB space (using the Euclidean norm), as shown in (4.17) and (4.18). Therefore, the magnitude B of the maximum variation rate at (x_o, y_o) is estimated as the magnitude of the resultant vector:

B(x_o, y_o) = \sqrt{\,||V(x_o, y_o)||^2 + ||H(x_o, y_o)||^2\,}    (4.17)

and the direction \theta of the maximum variation rate at (x_o, y_o) is estimated as:

\theta = \arctan\left( \frac{V'(x_o, y_o)}{H'(x_o, y_o)} \right) + k\pi    (4.18)

where k is an integer and:

V'(x_o, y_o) = \begin{cases} \;\;\,||V(x_o, y_o)|| & \text{if } ||V^+(x_o, y_o)|| \geq ||V^-(x_o, y_o)|| \\ -||V(x_o, y_o)|| & \text{otherwise} \end{cases}

H'(x_o, y_o) = \begin{cases} \;\;\,||H(x_o, y_o)|| & \text{if } ||H^+(x_o, y_o)|| \geq ||H^-(x_o, y_o)|| \\ -||H(x_o, y_o)|| & \text{otherwise} \end{cases}

where ||.|| denotes the Euclidean norm. In this formulation, the color contrast has no sign. In order to obtain the direction of maximal contrast, a convention is adopted to attribute signs to the quantities V'(x_o, y_o) and H'(x_o, y_o) in (4.18). These quantities are considered positive if the luminance increases in the positive directions of the image coordinate system. The luminance quantities are estimated here by the norms ||H^+||, ||H^-||, ||V^+|| and ||V^-||. Typically the luminance has been estimated using the norm ||c||_1 = r + g + b; however, the norm ||c||_2 = \sqrt{r^2 + g^2 + b^2} has also been used to estimate luminance [21]. Another possibility would be to consider the local color contrast with respect to a reference (e.g. the central portion of the operator, c_o) instead of the luminance quantity. However, this last possibility can present some ambiguities. For example, in vertical ramp edges ||H^- - c_o|| = ||H^+ - c_o||, so H'(x_o, y_o) would have a positive sign irrespective of the actual sign of the ramp slope [13].

Note the similarity between the color gradient formulated above and a Prewitt-type (2w+1) x (2w+1) monochromatic gradient [20]. The larger the parameter w, the smaller the operator's sensitivity to noise, and also to sharp edges. This happens because of the smoothing (low pass) effect associated with the convolution mask: the larger the size of the convolution mask, the stronger the low pass effect, and the less sensitive the operator becomes to high spatial frequencies. Also note that H^-, H^+, V^- and V^+ are in fact convolution masks and could easily implement the latest vector order statistics filtering approaches.
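The sketch below illustrates the directional operator of (4.15)-(4.18) for a single pixel. It is a hedged, single-pixel illustration; the window parameter w, the function name, and the use of the Euclidean norm for the sign convention follow the description above, but the code itself is an assumption, not the authors' implementation.

```python
import numpy as np

def directional_operator(rgb, xo, yo, w=1):
    """Color contrast magnitude B and direction theta at (xo, yo), eqs. (4.15)-(4.18).

    rgb is an (H, W, 3) float array indexed as rgb[y, x]; (xo, yo) must lie
    at least w pixels away from the image border.
    """
    norm = 1.0 / (w * (2 * w + 1))
    h_plus = norm * rgb[yo - w:yo + w + 1, xo + 1:xo + w + 1].sum(axis=(0, 1))
    h_minus = norm * rgb[yo - w:yo + w + 1, xo - w:xo].sum(axis=(0, 1))
    v_plus = norm * rgb[yo + 1:yo + w + 1, xo - w:xo + w + 1].sum(axis=(0, 1))
    v_minus = norm * rgb[yo - w:yo, xo - w:xo + w + 1].sum(axis=(0, 1))

    dh = h_plus - h_minus                                    # eq. (4.16)
    dv = v_plus - v_minus
    b = np.hypot(np.linalg.norm(dv), np.linalg.norm(dh))     # eq. (4.17)

    # Sign convention of (4.18): a difference is taken as positive when the
    # luminance (estimated here by the Euclidean norm) grows along +x or +y.
    h_prime = np.linalg.norm(dh)
    if np.linalg.norm(h_plus) < np.linalg.norm(h_minus):
        h_prime = -h_prime
    v_prime = np.linalg.norm(dv)
    if np.linalg.norm(v_plus) < np.linalg.norm(v_minus):
        v_prime = -v_prime
    theta = np.arctan2(v_prime, h_prime)
    return b, theta
```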

Compound edge detectors. The simple color gradient operator can also be used to implement compound gradient operators [13]. A well known example of a compound operator is the derivative of a Gaussian operator [20]. In this case, each channel of the color image is initially convolved with a Gaussian smoothing function G(x, y, \sigma), where \sigma is the standard deviation, and this gradient operator is then applied to the smoothed color image to detect edges. Torre and Poggio [22] stated that differential operations on sampled images require the image to be first smoothed by filtering. Gaussian filtering has the advantage of guaranteeing the band-limitedness of the signal, so that the derivative exists everywhere. This is equivalent to regularizing the signal using a low pass filter prior to the differentiation step.

The low pass filtering (regularization) step is done by the convolution of a Gaussian G(x, y, \sigma) with the image signal. In a multi-spectral image, each pixel is associated with a vector c(x, y) whose components are denoted by c_i(x, y), i = 1, 2, 3. This convolution is expressed as follows:

G(x, y, \sigma) * I = G(x, y, \sigma) * I_i, \quad \forall i    (4.19)

where I and I_i denote the image itself and the image component i. The image edges are then detected using the operator described before, and at each pixel the edge orientation \theta(x, y) and magnitude B(x, y) are obtained. The filtering operation introduces an arbitrary parameter, the scale of the filter, e.g. the standard deviation \sigma for the Gaussian filter. A number of authors have discussed the relationship between multiresolution analysis, Gaussian filtering and zero-crossings of filtered signals [22], [23].

The actual edge locations are detected by computing the zero-crossings of the second-order differences image, obtained by applying first-order difference operators twice. Once the zero-crossings are found, they must still be tested for maximality of contrast. Let ZC(x, y) denote the zero-crossing image elements. In practice, the image components are only known at the nodes of a rectangular grid of sampling points (x, y), and the zero-crossing condition ZC(x, y) = 0 often does not apply exactly. The simple use of local minima conditions leaves a margin of uncertainty. The zero-crossing locations can be found by identifying how the sign of ZC(x, y) varies in the direction of maximal contrast, near the zero-crossing location [15]. Therefore, the condition ZC(x, y) = 0 must be substituted by the more practical condition:

ZC(x_i, y_i) \cdot ZC(x_j, y_j) < 0    (4.20)

where the sampling points (x_i, y_i) and (x_j, y_j) are 8-adjacent, and the derivatives required for the computation of ZC(x, y) are approximated by convolutions with the masks proposed by Beaudet [24]. Notice that (x_i, y_i) and (x_j, y_j) lie in the direction of maximal contrast calculated at (x_o, y_o), the center of the 8-adjacent neighborhood. In order to improve the spatial localization (mostly with larger operator sizes w), a local minimum condition is also used (i.e. |ZC(x_o, y_o)| < \epsilon, \epsilon \simeq 0). With the compound detector, Gaussian noise can be reduced thanks to the Gaussian smoothing function. Though this operator improves performance in Gaussian noise, it is still sensitive to impulse noise.

Entropy operator. The entropy operator can be employed for both monochrome and color images. It yields a small value when the color chromaticity in the local region is uniform and a large value when there are drastic changes in the color chromaticity. The entropy in a processing window (e.g. 3x3) centered on the vector v_o = (r_o, g_o, b_o) is defined as:

H = q_R H_R + q_G H_G + q_B H_B    (4.21)

where H_R, H_G, H_B denote the entropies in the R, G, B directions, respectively, and

q_R = \frac{r_o}{r_o + g_o + b_o}, \quad q_G = \frac{g_o}{r_o + g_o + b_o}, \quad q_B = \frac{b_o}{r_o + g_o + b_o}    (4.22)

Let X_o, X_1, \ldots, X_N (X = R, G, B) denote the values in the corresponding channel inside the processing window; then H_X is defined as:

H_X = -\frac{\sum_{i=1}^{N} p_{X_i} \log(p_{X_i})}{\log(N)}    (4.23)

where

p_{X_i} = \frac{X_i}{\sum_{j=1}^{N} X_j}    (4.24)

Edges can be extracted by detecting the change of the entropy H over a window region. Since the presence of noise can disturb the local chromaticity of an image, the entropy operator is sensitive to noise [17].
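A minimal sketch of the entropy of (4.21)-(4.24) for one processing window is given below; the window layout and the function name are illustrative assumptions.

```python
import numpy as np

def window_entropy(window):
    """Entropy H of an (n, n, 3) RGB window centred on its middle pixel, eqs. (4.21)-(4.24)."""
    n = window.shape[0]
    ro, go, bo = window[n // 2, n // 2]
    q = np.array([ro, go, bo], dtype=float) / float(ro + go + bo)    # eq. (4.22)

    h = 0.0
    big_n = window.shape[0] * window.shape[1]
    for c in range(3):
        x = window[..., c].astype(float).ravel()
        p = x / x.sum()                                              # eq. (4.24)
        # terms with p = 0 contribute 0 to the sum
        hx = -np.sum(p[p > 0] * np.log(p[p > 0])) / np.log(big_n)    # eq. (4.23)
        h += q[c] * hx                                               # eq. (4.21)
    return h
```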

Second derivative operators. A more sophisticated approach, which involves a second derivative operator, was suggested by Cumani. Given a vector field f(x, y) for a color image, the squared local contrast of f at a point P = (x, y) in the direction of the unit vector n = (n_1, n_2) is defined as:

S(P, n) = E n_1^2 + 2F n_1 n_2 + G n_2^2    (4.25)

where

E = \frac{\partial f}{\partial x} \cdot \frac{\partial f}{\partial x} = \frac{\partial R}{\partial x}\frac{\partial R}{\partial x} + \frac{\partial G}{\partial x}\frac{\partial G}{\partial x} + \frac{\partial B}{\partial x}\frac{\partial B}{\partial x}    (4.26)

F = \frac{\partial f}{\partial x} \cdot \frac{\partial f}{\partial y} = \frac{\partial R}{\partial x}\frac{\partial R}{\partial y} + \frac{\partial G}{\partial x}\frac{\partial G}{\partial y} + \frac{\partial B}{\partial x}\frac{\partial B}{\partial y}    (4.27)

G = \frac{\partial f}{\partial y} \cdot \frac{\partial f}{\partial y} = \frac{\partial R}{\partial y}\frac{\partial R}{\partial y} + \frac{\partial G}{\partial y}\frac{\partial G}{\partial y} + \frac{\partial B}{\partial y}\frac{\partial B}{\partial y}    (4.28)

The eigenvalues of the 2x2 matrix \begin{pmatrix} E & F \\ F & G \end{pmatrix} coincide with the extreme values of S(P, n) and are attained when n is the corresponding eigenvector. The extreme values are:

\lambda_{\pm} = \frac{E + G \pm \sqrt{(E - G)^2 + 4F^2}}{2}    (4.29)

and the two corresponding eigenvectors n_+ and n_- are given as

n_{\pm} = (\cos\theta_{\pm}, \sin\theta_{\pm})    (4.30)

\theta_+ = \begin{cases} \pi/4 & \text{if } (E - G) = 0 \text{ and } F > 0 \\ -\pi/4 & \text{if } (E - G) = 0 \text{ and } F < 0 \\ \text{undefined} & \text{if } (E - G) = 0 \text{ and } F = 0 \\ \frac{1}{2} \arctan\frac{2F}{E - G} & \text{otherwise} \end{cases}    (4.31)

\theta_- = \theta_+ + \frac{\pi}{2}    (4.32)

Possible edge points are the points P where the first directional derivative D_S(P, n) of the maximal squared contrast \lambda_+(P) is zero in the direction of maximal contrast n_+(P). The directional derivative is defined as:

D_S(P, n_+) = \nabla\lambda_+ \cdot n_+ = \frac{\partial \lambda_+}{\partial x} n_1 + \frac{\partial \lambda_+}{\partial y} n_2 = E_x n_1^3 + (2F_x + E_y) n_1^2 n_2 + (G_x + 2F_y) n_1 n_2^2 + G_y n_2^3    (4.33)

The edge points are determined by computing the zero-crossings of D_S(P, n). Since the local directional contrast needs to be a maximum or minimum, the sign of D_S along a curve tangent at P in the direction of n_+ is checked, and an edge point is located if it is found to be a maximal point. The ambiguity of the gradient direction in the above method causes some difficulties in locating edge points. A subpixel technique with bilinear interpolation can be employed to solve the problem. A modification that resolves the ambiguities by estimating the eigenvector n_+, and which avoids the computationally costly subpixel approximation, was suggested in [30]. Other techniques [3] have also been proposed to improve the performance of the Cumani operator and reduce its complexity. The proposed operator utilizes different sized convolution masks, based on the derivatives of the Gaussian distribution, instead of the set of fixed-size 3x3 masks. It was argued in [3] that a considerable increase in the quality of the results can be obtained when the Gaussian masks are employed. Similarly to the vector gradient operator, the second-order derivative operator is very sensitive to texture variations and impulsive noise, but it produces thinner edges. The regularizing filter applied in this operator causes a certain amount of blurring in the edge map.

4.3 Vector Order Statistic Edge Operators

As seen in Chap. 2, order statistics based operators play an important role in image processing [31], [32]. Order statistics operators have been used extensively in monochrome (gray scale) as well as color image edge detection. This approach is inspired by the morphological edge detectors [33], [34] that have been proposed for monochrome images. This class of color edge detectors is characterized by linear combinations of the sorted vector samples. Different sets of coefficients in the linear combination give rise to different edge detectors that vary in performance and efficiency.

The primary step in order statistics is to arrange a set of random variables in ascending order according to certain criteria. However, as described in Chap. 2, there is no universal way to order the color vectors in the different color spaces. The different ordering schemes discussed in this book can be used to define order statistic edge operators. Let the color image vectors in a window W be denoted x_i, i = 1, 2, ..., n, and let D(x_i, x_j) be a measure of distance (or similarity) between the color vectors x_i and x_j. The vector range edge detector (VR) is the simplest color edge detector based on order statistics. It expresses the deviation of the highest-ranked vector outlier from the vector median in W as follows:

VR = D(x_{(n)}, x_{(1)})    (4.34)

VR quantifies the deviation of the highest-ranked vector outlier from the vector median x_{(1)}. Consequently, VR is small in a uniform area, where all vectors are close together, and it returns a large output when discontinuities exist. Its response on an edge will be large, since x_{(n)} will be selected from the vectors on one side of the edge while x_{(1)} will be selected from the vectors on the other side. The actual color edges can be obtained by thresholding the VR outputs. The VR detector, though simple and efficient, is sensitive to noise, especially to impulse noise: it will respond to a single noise pixel at the center of a window W of n pixels. To alleviate this drawback, dispersion measures, which are known to be more robust estimates in the presence of noise, should be considered. To this end, a class of edge detectors can be defined as a linear combination of the ordered image vectors. This class of operators expresses a measure of the dispersion of the ordered vectors. Therefore, the vector dispersion edge detector (VDED) can be defined as follows:

VDED = \left\| \sum_{i=1}^{n} \alpha_i \, x_{(i)} \right\|    (4.35)

where ||.|| denotes the appropriate norm. Note that VR is a special case of VDED with \alpha_1 = -1, \alpha_n = 1, and \alpha_i = 0, i = 2, \ldots, n-1. The above equation can be further generalized by employing k sets of coefficients and combining the resulting vector magnitudes in an appropriate way. The combination proposed in [17] employs a minimum operator, which attenuates the effect of noise. According to this definition, the general class of vector dispersion edge detectors can be written as:

GVDED = \min_j \left\{ \left\| \sum_{i=1}^{n} \alpha_{ij} \, x_{(i)} \right\| \right\}, \quad j = 1, 2, \ldots, k    (4.36)

Specific color edge detectors can be obtained from (4.36) by selecting the set of coefficients \alpha_{ij}. One member of the GVDED family is the minimum vector dispersion detector (MVD), and it is defined as:

MVD = \min_j \left\{ D\!\left( x_{(n-j+1)}, \; \frac{\sum_{i=1}^{l} x_{(i)}}{l} \right) \right\}, \quad j = 1, 2, \ldots, k; \quad k, l < n    (4.37)

The choice of k and l depends on n, the size of W. These two parameters control the trade-off between complexity and noise attenuation. This more computationally involved operator can improve edge detection performance in the presence of both impulse and Gaussian noise. It can eliminate up to k-1 impulse noise pixels in W [17]. Let there be k-1 impulse noise pixels in a window of n pixels. By their nature, impulse noise pixels differ from the rest of the pixels by a large amount. Therefore, after ordering, the impulse noise pixels occupy the highest ranks: x_{(n-k+2)}, x_{(n-k+3)}, \ldots, x_{(n)}. Since the distances between these noise pixels and the rest of the pixels are large, (4.37) reduces to:

MVD = D\!\left( x_{(n-k+1)}, \; \frac{\sum_{i=1}^{l} x_{(i)}}{l} \right)    (4.38)

Notice that none of the noise pixels appears in this equation, and thus they do not affect the edge detection process. MVD has improved noise performance since it is robust to the presence of heavy-tailed noise, due to the minimum operation, and of short-tailed noise, due to the averaging operation.

A statistical analysis of MVD must be carried out in order to determine the error probability of the edge detector. The analysis is confined to the case of an additive, multivariate normal (Gaussian) distribution. An ideal edge model will be considered in this analysis. According to the model, the sample vectors X_i on the one side of the edge are instances of a random variable X which follows a multivariate Gaussian distribution with known mean \mu_x and unit covariance. Similarly, the sample vectors Y_i on the other side are instances of the random variable Y which follows a multivariate Gaussian distribution with known mean \mu_y and unit covariance. Then, the error probability is given as:

P_E = P_e P_M + P_n P_F    (4.39)

where P_e and P_n denote the prior probabilities of `edge' and `no edge', respectively, and P_M and P_F are the probabilities of missing an edge and of a false edge alarm, respectively. If \hat{X} is the mean of the vectors X_i, then P_M can be calculated as:

P_M = \Pr\{ \min_i \|Y_i - \hat{X}\| < t \mid \|\mu_y - \mu_x\| \geq t \}    (4.40)

Denoting by d_{(i)} the sorted distances \|Y_i - \hat{X}\|, it can be claimed that d_{(1)} = \min_i \|Y_i - \hat{X}\|. Furthermore, defining \|\mu_y - \mu_x\| = \eta, (4.40) can be rewritten as:

P_M = \Pr\{ d_{(1)} - \eta < t' \mid t' < 0 \}, \quad t' = t - \eta

P_M = \frac{\Pr\{ d'_{(1)} < t', \; t' < 0 \}}{\Pr\{ t' < 0 \}} = \frac{F_{d'_{(1)}}(t')}{\Pr\{ t' < 0 \}}, \quad t' < 0    (4.41)

where d'_{(1)} = d_{(1)} - \eta. Carrying out similar computations, P_F is given as:

P_F = 1 - \frac{F_{d'_{(1)}}(t')}{\Pr\{ t' \geq 0 \}}, \quad t' \geq 0    (4.42)

It can easily be seen that P_e = \Pr\{ t' < 0 \}, P_n = \Pr\{ t' \geq 0 \} and consequently:

P_E = F_{d'_{(1)}}(t')\, u(-t') + \left[ \Pr\{ t' \geq 0 \} - F_{d'_{(1)}}(t') \right] u(t')    (4.43)

where u(x) is the unit step and F_{d'_{(1)}}(t') = F_{d_{(1)}}(t), since F_{d'_{(1)}}(t') = \Pr\{ d'_{(1)} \leq t' \} = \Pr\{ d_{(1)} \leq t \}, with

F_{d_{(1)}}(x) = 1 - [1 - F_d(x)]^p    (4.44)

obtained from F_d, the distribution function of d, and p the number of sample distances.

In order to complete the calculations, the distribution function F_d should be determined. Assuming Euclidean distances, d^2 follows a non-central chi-square distribution with m degrees of freedom and non-centrality parameter s = (\mu_y - \mu_x)^T (\mu_y - \mu_x) [17]. The cumulative distribution function of the non-central chi-square distribution, when z = m/2 is an integer, can be expressed in terms of the generalized Q function as F_{d^2}(y) = 1 - Q_z(\sqrt{s}, \sqrt{y}) [41]. Since the Euclidean distances are non-negative, F_d can be obtained by a simple change of variables:

F_d(y) = \Pr\{ d \leq y \} = \Pr\{ d^2 \leq y^2 \} = F_{d^2}(y^2) = 1 - Q_z(\sqrt{s}, y)    (4.45)

From (4.45), (4.44) and (4.43) can be computed, provided that \Pr\{ t' \geq 0 \} is known. For the model assumed in this analysis, t' = t - \eta, where t is the detector's threshold and \eta = \|\mu_y - \mu_x\|. Given that t is a deterministic quantity and \eta is a constant, t' is also a deterministic quantity, and \Pr\{ t' \geq 0 \} is unity or zero for t' \geq 0 or t' < 0, respectively.

An alternative design of the GVDED operators utilizes the adaptive nearest-neighbor filter [38], [39]. The coefficients are chosen to adapt to local image characteristics. Instead of constants, the coefficients are determined by an adaptive weight function for each window W. The operator is defined as the distance between the outlier and the weighted sum of all the ranked vectors:

NNVR = D\!\left( x_{(n)}, \; \sum_{i=1}^{n} w_i \, x_{(i)} \right)    (4.46)

The weight function w_i is determined adaptively using transformations of a distance criterion at each image location, and it is not uniquely defined. There are two constraints on the weight function:
- each weight coefficient is positive, w_i \geq 0;
- the weight function is normalized, \sum_{i=1}^{n} w_i = 1.
Since the operator should also attenuate noise, it is important to assign a small weight to the pixels with high ranks (i.e. the outliers). A possible weight function can be defined as follows:

w_i = \frac{d_{(n)} - d_{(i)}}{n \, d_{(n)} - \sum_{j=1}^{n} d_{(j)}}    (4.47)

where d_{(i)} is the aggregated distance associated with vector x_{(i)} inside the processing window W.

One special case for this weight function occurs in a highly uniform area, where all pixels have the same aggregated distance. The above weight function cannot be used there, since the denominator is zero; because no edge exists in such an area, the difference measure NNVR is set to zero. The MVD operator can also be combined with the NNVR operator to further improve its performance in the presence of impulse noise, as follows:

NNMVD = \min_j \left\{ D\!\left( x_{(n-j+1)}, \; \sum_{i=1}^{n} w_i \, x_{(i)} \right) \right\}, \quad j = 1, 2, \ldots, k; \quad k < n    (4.48)

A final annotation on the class of vector order statistic operators concerns the distance measure D(x_i, x_j). By convention, the Euclidean distance measure (L2 norm) is adopted. The use of the L1 norm is also considered, because it reduces the computational complexity by computing absolute values instead of squares and a square root, without any notable deviation in performance. A few other distance measures are also considered in the attempt to locate an optimal measure, namely the Canberra metric, the Czekanowski coefficient, and the angular distance measure. Their performance will be addressed later. The Canberra metric [36] is defined as:

D(x_i, x_j) = \sum_{k=1}^{m} \frac{|x_{i,k} - x_{j,k}|}{x_{i,k} + x_{j,k}}    (4.49)

The Czekanowski coefficient [36] is defined as:

D(x_i, x_j) = 1 - \frac{2 \sum_{k=1}^{m} \min(x_{i,k}, x_{j,k})}{\sum_{k=1}^{m} (x_{i,k} + x_{j,k})}    (4.50)

where m is the number of vector components; in the case of a color image, m = 3, corresponding to the three channels (R, G, B). In addition, the angular distance measure of [16] can be used:

D(x_i, x_j) = \arccos\left( \frac{x_i \cdot x_j}{\|x_i\| \, \|x_j\|} \right)    (4.51)

where ||.|| denotes the magnitude of a color vector. Based on these three distance measures, a variety of color edge detectors can be established.
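Closing this section, the sketch below computes the VR response of (4.34) and the MVD response of (4.37) for a single window, using R-ordering by aggregated Euclidean distances. It is a hedged illustration; the parameter values k and l and the function names are assumptions, not a reference implementation.

```python
import numpy as np

def r_order(vectors):
    """Sort an (n, 3) set of color vectors by their aggregated Euclidean distance."""
    dists = np.array([np.linalg.norm(vectors - v, axis=1).sum() for v in vectors])
    return vectors[np.argsort(dists)]

def vr_response(window):
    """Vector range VR = D(x_(n), x_(1)), eq. (4.34), for an (n, n, 3) window."""
    x = r_order(window.reshape(-1, 3).astype(float))
    return np.linalg.norm(x[-1] - x[0])

def mvd_response(window, k=3, l=4):
    """Minimum vector dispersion, eq. (4.37), for an (n, n, 3) window."""
    x = r_order(window.reshape(-1, 3).astype(float))
    centre = x[:l].mean(axis=0)       # average of the l lowest-ranked vectors
    return min(np.linalg.norm(x[-(j + 1)] - centre) for j in range(k))
```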

4.4 Difference Vector Operators

The class of difference vector operators (DV) can be viewed as first derivative-like operators. This group of operators is extremely effective from the point of view of the computational aspects of the human visual system. In this approach, each pixel represents a vector in the RGB color space, and a gradient is obtained in each of the four possible directions (0°, 45°, 90°, 135°) by applying convolution kernels to the pixel window. A threshold can then be applied to the maximum gradient vector to locate the edges. The gradients are defined as:

|\nabla f|_{0^\circ} = \| y_{0^\circ} - x_{0^\circ} \|    (4.52)
|\nabla f|_{90^\circ} = \| y_{90^\circ} - x_{90^\circ} \|    (4.53)
|\nabla f|_{45^\circ} = \| y_{45^\circ} - x_{45^\circ} \|    (4.54)
|\nabla f|_{135^\circ} = \| y_{135^\circ} - x_{135^\circ} \|    (4.55)

DV = \max\{ |\nabla f|_{0^\circ}, |\nabla f|_{90^\circ}, |\nabla f|_{45^\circ}, |\nabla f|_{135^\circ} \}    (4.56)

where ||.|| denotes the L2 norm, and x and y are the three-dimensional vectors used as convolution kernels.

The variation in the definitions of these convolution kernels gives rise to a number of operators. Fig. 4.2 shows the partition of the pixel window into two sub-windows within which each convolution kernel is calculated, in all four directions. The basic operator of this group employs a (3x3) window involving a center pixel and its eight neighboring pixels. Let v(x, y) denote a pixel. The convolution kernels for the center pixel v(x_0, y_0) in all four directions are defined as:

X_{0^\circ} = v(x_{-1}, y_0), \quad Y_{0^\circ} = v(x_1, y_0)    (4.57)
X_{45^\circ} = v(x_{-1}, y_1), \quad Y_{45^\circ} = v(x_1, y_{-1})    (4.58)
X_{90^\circ} = v(x_0, y_{-1}), \quad Y_{90^\circ} = v(x_0, y_1)    (4.59)
X_{135^\circ} = v(x_1, y_1), \quad Y_{135^\circ} = v(x_{-1}, y_{-1})    (4.60)
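A minimal sketch of the basic 3x3 difference vector operator of (4.52)-(4.60), without any sub-filtering, is given below. The function name and the direction convention are illustrative assumptions; the pixel must not lie on the image border.

```python
import numpy as np

def dv_response(rgb, y, x):
    """DV response of eq. (4.56) at interior pixel (y, x) of an (H, W, 3) float image."""
    pairs = [((y, x - 1), (y, x + 1)),          # 0 degrees
             ((y + 1, x - 1), (y - 1, x + 1)),  # 45 degrees
             ((y - 1, x), (y + 1, x)),          # 90 degrees
             ((y + 1, x + 1), (y - 1, x - 1))]  # 135 degrees
    grads = [np.linalg.norm(rgb[b] - rgb[a]) for a, b in pairs]
    return max(grads)
```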


Fig. 4.2. Sub-window configurations

This operator requires the least amount of computation among the edge detectors considered so far. However, as with the VR operator, DV is also sensitive to impulsive and Gaussian noise [25]. As a result, more complex operators with sub-filtering have been designed. A larger window size is needed in this case to provide more data for processing. Although there is no upper limit on the size of the window, a (5x5) window is usually preferred, since the computational complexity is directly linked to the size of the window. In addition, when the window becomes too large it can no longer represent the characteristics of the local region. For an (n x n) window (n = 2k+1, k = 2, 3, \ldots), the number of pixels in each of the sub-windows illustrated in Fig. 4.2 is N = (n^2 - 1)/2. A filter function can be applied to these N pixels in each sub-window to obtain the respective convolution kernels:

X_{d^\circ} = f(v^{sub_1}_{d^\circ,1}, v^{sub_1}_{d^\circ,2}, \ldots, v^{sub_1}_{d^\circ,N})    (4.61)

Y_{d^\circ} = f(v^{sub_2}_{d^\circ,1}, v^{sub_2}_{d^\circ,2}, \ldots, v^{sub_2}_{d^\circ,N})    (4.62)

where d = 0, 45, 90, 135. Depending on the type of noise one wishes to attenuate, different filters can be utilized. Four types of non-linear image filters based on order statistics are employed in this work.

The first type of color edge detector incorporates the vector median filter [37]. This filter outputs the vector median of the N vector samples in the sub-window, using the concept of vector order statistics introduced above: the N vector samples are arranged in ascending order using R-ordering, v_{(1)} \leq v_{(2)} \leq \ldots \leq v_{(N)}, and the vector with the lowest rank, v_{(1)}, is the vector median. This operator can be made more efficient by locating the vector with the minimum reduced distance calculated in the R-ordering, instead of ordering all N samples, since only the vector median is of importance here. The vector median was discussed in detail in Chap. 3. Based on the analysis presented there, the vector median filter is very effective for reducing impulse noise because it can reject up to (N-1) impulse noise pixels in a sub-window. However, since only the median vectors are used for edge detection, some edges may be rejected as noise and remain undetected.

The second type of filter is the arithmetic (linear) mean filter (hereafter f_{VM}). This filter reduces the effect of Gaussian noise by averaging all the vector samples:

f_{VM}(v_1, v_2, \ldots, v_N) = \frac{1}{N} \sum_{i=1}^{N} v_i    (4.63)

Due to the simplicity of the averaging operation, the vector mean operator is much more efficient than the vector median operator. However, the vector mean operator may cause certain false edges, since the pixels used for edge detection are no longer the original pixels. The third type of filter, the \alpha-trimmed mean filter, is a compromise between the above two filters. It is defined as:

f_{\alpha\text{-trim}}(v_1, v_2, \ldots, v_N) = \frac{1}{N(1 - 2\alpha)} \sum_{i=1}^{N(1-2\alpha)} v_{(i)}    (4.64)

where \alpha is in the range [0, 0.5). When \alpha is 0, no vector is rejected and the filter reduces to the vector mean filter. When \alpha approaches 0.5, all vectors except the vector median are rejected and the filter reduces to the vector median filter. For other values of \alpha, this operator can reject up to 200\alpha\% of the samples as impulse noise pixels, and it outputs the average of the remaining vector samples. Therefore the \alpha-trimmed mean filter can improve noise performance in the presence of both Gaussian and impulse noise. The last type of filter to be addressed is the adaptive nearest-neighbor filter [38]. The output of this filter is a weighted vector sum, with a weight function that varies adaptively for each sub-window:

f_{adap}(v_1, v_2, \ldots, v_N) = \sum_{i=1}^{N} w_i \, v_i    (4.65)

where the weight function w_i was given in (4.47); it assigns a higher weight to the vectors with lower ranks and a lower weight to the outliers. This filter is also effective with mixed Gaussian and impulse noise, and it bears approximately the same complexity as the \alpha-trimmed mean filter, since both need to perform the R-ordering. Again, since edge detection is performed on the outputs of the filter instead of the original pixels, there may be a reduction in the resulting edge quality.

Another group of operators follows a concept similar to the sub-filtering operators, except that pre-filtering is used instead. Any one of the above filters can be used to pre-filter the image with a (3x3) window, and the DV operator with the same window size is then used for edge detection. Unlike the previous group, in this family the pixel window is not divided into sub-windows during filtering, and the filter is applied only once to the whole window. The advantage of this group of operators is that it is considerably more efficient than the previous group, since the filtering operation, which accounts for most of the complexity, is performed only once instead of eight times (twice for each of the four directions) for each pixel.

One last proposed variation of the difference vector operators considers edge detection in only two directions, horizontal and vertical, instead of four:

DV_{hv} = \max\{ |\nabla f|_{0^\circ}, |\nabla f|_{90^\circ} \}    (4.66)

It is anticipated that such a design will be as powerful as the four-direction DV operator, because:
- human vision is more sensitive to horizontal and vertical edges than to others;
- the horizontal and vertical difference vectors are able to detect most of the diagonal edges as well, which in turn can reduce the thickness of these edges by eliminating the redundancy from the diagonal detectors.
In addition, the amount of computation involved with this operator is slightly reduced.
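As an illustration of the \alpha-trimmed mean of (4.64), which can serve as either a sub-filter or a pre-filter in the designs above, the following hedged sketch trims the R-ordered samples of one sub-window; the value of \alpha and the function name are assumptions.

```python
import numpy as np

def alpha_trimmed_mean(vectors, alpha=0.2):
    """Alpha-trimmed mean of an (N, 3) set of color vectors, eq. (4.64).

    The vectors are R-ordered by aggregated Euclidean distance; the
    2*alpha*N highest-ranked samples (the most outlying ones) are
    discarded and the remaining N*(1 - 2*alpha) samples are averaged.
    """
    v = np.asarray(vectors, dtype=float)
    n = len(v)
    dists = np.array([np.linalg.norm(v - vi, axis=1).sum() for vi in v])
    ordered = v[np.argsort(dists)]
    keep = max(1, int(round(n * (1.0 - 2.0 * alpha))))
    return ordered[:keep].mean(axis=0)
```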

4.5 Evaluation Procedures and Results

To investigate further the performance of the vector order statistic operators and the difference vector operators, it is necessary to determine how these two classes of operators compare to each other and how the individual edge detectors in each class rank among themselves. Both quantitative and qualitative measures are used to evaluate the performance of the edge detectors in terms of accuracy in edge detection and robustness to noise. The quantitative performance measures can be grouped into two types, probabilistic measures and distance measures. The first type is based on the statistics of correct edge detection and false edge rejection. The second type is based on the deviation of the detected edges from the true edges. The first type of measure can be adopted to evaluate the accuracy of edge detection by measuring the percentage of correctly and falsely detected edges. Since a pre-defined edge map (ground truth) is needed, synthetic images are used for this experiment. The second type of measure can be adopted to evaluate the noise performance by measuring the deviation of the edges caused by noise from the true edges [42], [43]. Since numerical measures are not sufficient to model the complexity of the human visual system, qualitative evaluation using subjective tests is necessary in most image processing applications. Also, evaluation based on synthetic images has limited value because the results cannot easily be extrapolated to real images [40]. As a result, real images are also used in the evaluation process. All the images used for evaluation are defined in the RGB color space. A total of 24 edge detectors from the classes of the vector order statistic operators and the difference vector operators are implemented, and their performance is evaluated along with the Sobel edge detector (see Tables 4.1 and 4.2).

Table 4.1. Vector Order Statistic Operators

  VR0         Vector Range operator (W: 3x3) (with L1 norm)
  VR1         Vector Range operator (W: 3x3) with Canberra metric
  VR2         Vector Range operator (W: 3x3) with Czekanowski coefficient
  VR3         Vector Range operator (W: 3x3) with angular distance measure
  MVD 3       MVD operator (W: 3x3) with k=3, l=4
  MVD 5a      MVD operator (W: 5x5) with k=3, l=4
  MVD 5b      MVD operator (W: 5x5) with k=6, l=9
  NNVR 3      NNVR operator (W: 3x3)
  NNVR 5      NNVR operator (W: 5x5)
  NNMVD 3     NNMVD operator (W: 3x3) with k=3
  NNMVD 5a    NNMVD operator (W: 5x5) with k=3
  NNMVD 5b    NNMVD operator (W: 5x5) with k=6

Table 4.2. Difference Vector Operators

  DV             DV operator (W: 3x3) in four directions
  DV hv          DV operator (W: 3x3), only horizontal and vertical directions
  DVadap         DV operator (W: 5x5) with adaptive sub-filter
  DVadap hv      same as DVadap, but in only two directions
  DV alpha-trim  DV operator (W: 5x5) with alpha-trimmed sub-filter
  DVmean         DV operator (W: 5x5) with vector mean sub-filter
  DVmedian       DV operator (W: 5x5) with vector median sub-filter
  fDVadap        DV operator (W: 3x3) with adaptive pre-filter on the entire window
  fDVadap hv     same as fDVadap, but in only two directions
  fDV alpha-trim DV operator (W: 3x3) with alpha-trimmed pre-filter on the entire window
  fDVmean        DV operator (W: 3x3) with vector mean pre-filter on the entire window
  fDVmedian      DV operator (W: 3x3) with vector median pre-filter on the entire window

4.5.1 Probabilistic Evaluation

Several artificial images with pre-specified edges were created for assessing the probabilistic performance of the selected edge detectors. In order to analyze the responses of the edge detectors to different types of edges, these images contain: vertical, horizontal, and diagonal edges; round and sharp edges; edges caused by variation in only one, only two or all three components; and isoluminant and non-isoluminant areas. In this experiment, noise is not added to the images. The resulting edge maps from each detector are compared with the pre-defined edge maps, and the numbers of correctly and falsely detected edges are computed and represented as hit and fault ratios, as shown in Table 4.3 [39]. The hit ratio is defined as the percentage of correctly detected edges, and the fault ratio is the ratio between the number of false edges detected and the number of true edges in the pre-defined edge map.

These two parameters are selected for this evaluation because they characterize the accuracy of an edge detector (a small sketch of how they can be computed is given after the list below).

Table 4.3. Numerical Evaluation with Synthetic Images

  Edge Detector    % Hit    Fault Ratio
  Sobel            97.9%    1.21
  VR0              99.4%    1.55
  VR1              93.9%    1.49
  VR2              92.9%    1.48
  VR3              91.3%    1.46
  MVD 3            88.7%    0.95
  MVD 5a           99.2%    3.33
  MVD 5b           98.3%    1.53
  NNVR 3           99.4%    1.55
  NNVR 5           99.6%    4.01
  NNMVD 3          87.5%    0.95
  NNMVD 5a         94.4%    3.3
  NNMVD 5b         93.6%    1.51
  DV               99.4%    1.55
  DV hv            99.1%    1.14
  DVadap           4.6%     0.06
  DV alpha-trim    60.5%    0.65
  fDVadap          98.4%    2
  fDVadap hv       97.7%    1.58
  fDV alpha-trim   98.4%    1.99

From the results in Table 4.3, a few conclusions can be drawn:
1. Detectors such as the Sobel operator, the VR operator with L1 norm, and the DV operator without any filtering all give good performance for images free of noise contamination.
2. The MVD operator with a 3x3 window has a lower hit ratio, but it also gives fewer false edges. The MVD operators with a larger window size (e.g. 5x5) are able to provide a high hit ratio.
3. The NNVR operators also show good performance, but the NNMVD operators give a slightly lower hit ratio than that achieved by the MVD operators.
4. The L1 norm used in the VR operators shows superior performance compared to the other distance measures.
5. For the difference vector operators, the detectors with only horizontal and vertical direction detection have almost the same hit ratio as the DV operator with all four directions, but they detect considerably fewer false edges.
6. The DV operators with adaptive and alpha-trimmed sub-filtering show very poor results. It is worth pointing out that this is not the case with real images, as will be seen later. The sub-filtering seems to have undesirable effects on synthetic images. When pre-filtering is performed (fDVadap, fDV alpha-trim), this undesirable effect does not exist and these operators show good performance.
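For completeness, a small sketch of how the two ratios above can be computed from a ground-truth edge map and a detected edge map is shown below; the boolean-array representation and the function name are illustrative assumptions.

```python
import numpy as np

def hit_and_fault_ratio(detected, ground_truth):
    """Hit ratio and fault ratio of a detected edge map against a ground truth.

    Both inputs are boolean (H, W) arrays. The hit ratio is the fraction of
    true edge pixels that are detected; the fault ratio is the number of
    falsely detected edge pixels divided by the number of true edge pixels.
    """
    true_edges = ground_truth.sum()
    hits = np.logical_and(detected, ground_truth).sum()
    faults = np.logical_and(detected, ~ground_truth).sum()
    return hits / true_edges, faults / true_edges
```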

4.5.2 Noise Performance

Real images corrupted with mixed noise are used for this experiment. The mixed noise contains 4% impulsive noise and Gaussian noise with standard deviation \sigma = 30. The edge maps of the images corrupted with noise are compared with the edge maps of the original image for each edge detector. The noise performance is measured in terms of PSNR values, and the results are given in Table 4.4. The PSNR is an easily quantifiable measure of image quality, although it only provides a rough evaluation of the actual visual quality the eye may perceive in an edge map.

Table 4.4. Noise Performance

  Edge Detector      PSNR
  Sobel              30.9 dB
  VR0                24.4 dB
  DV                 29.4 dB
  MVD 3              26.3 dB
  MVD 5a             33.6 dB
  MVD 5b             35.4 dB
  NNVR 3             23.2 dB
  NNVR 5             28.6 dB
  NNMVD 3            25.9 dB
  NNMVD 5a           33.5 dB
  NNMVD 5b           35.2 dB
  DVadap             52.4 dB
  DVadap hv          52.2 dB
  DV alpha-trimmed   45.5 dB
  fDVadap            62.6 dB
  fDVadap hv         62.3 dB
  fDV alpha-trimmed  59.6 dB

A few observations can be made from the results:
1. The simple operators such as Sobel, VR and DV are sensitive to both impulsive and Gaussian noise. The noise performance can be improved with added complexity.
2. In the case of the vector order statistic operators, the MVD and NNMVD operators show more robustness in the presence of noise. It can also be confirmed that the noise performance improves with the increased complexity of the operators, which is controlled by the two parameters k and l.

3. For the class of difference vector operators, the added filtering improves the performance drastically. Since mixed noise is present, adaptive and alpha-trimmed filters are used for this experiment. The use of adaptive filters as pre-filters on the whole window demonstrates the best performance in noise suppression. Hence it can be concluded that the adaptive filter outperforms the alpha-trimmed filter, and that the pre-filtering method is better than the sub-filtering method in terms of noise suppression. Operators using only the horizontal and vertical directions show very slight deviations in PSNR values from the ones using all four directions.

4.5.3 Subjective Evaluation

Since subjective evaluation is very important in image processing, the aforementioned operators have been applied to a collection of real and artificial images ranging from face features to outdoor scenes. The subjective evaluation allows for further investigation of the characteristics of the obtained edge maps through the involvement of human factors. The operators are rated in terms of several criteria: (i) ease of organizing objects; (ii) continuity of edges; (iii) thinness of edges; (iv) performance in suppressing noise. The results obtained are in good agreement in all cases with the selected criteria [43]. After examining a large quantity of edge maps produced by each edge detector, the following conclusions can be drawn:
1. As suggested by the quantitative tests, the performances of the Sobel, VR, and DV operators are very similar in that they all produce good edge maps for noiseless images.

2. The MVD and NNMVD operators produce thinner edges and are less sensitive to small texture variations because of the averaging operation, which smooths out small variations. Also, as expected, these two groups of operators are able to extract edges even in noise corrupted images.
3. The two groups of difference vector operators, with sub-filtering and with pre-filtering, all demonstrate excellent performance for noise corrupted images. The vector mean operator performs best in impulsive noise, the vector median operator performs best in Gaussian noise, and the adaptive and alpha-trimmed operators perform best in mixed noise. The sub-filtering operator with the adaptive filter is able to produce fair edge maps for real images despite its unsuccessful attempts with synthetic images during the numerical evaluation. However, the visual assessments are in agreement with the numerical tests in that the group of pre-filtering operators outperforms the group of sub-filtering operators with the same filter.
4. One last note on the difference vector operators is that the operators with only horizontal and vertical directions produce thinner diagonal edges than those using all four directions.

The color images `ellipse', `flower' and `Lenna' used in the experiments are shown in Fig. 4.3-4.5. The last image is corrupted by 4% impulse noise and Gaussian noise (\sigma = 30). Edge maps of the synthetic image `ellipse' are shown in Fig. 4.6-4.9. Fig. 4.10-4.17 provide the edge maps produced by four selected operators for the test images `flower' and `Lenna'.

4.6 Conclusion

Accurate detection of edges is of primary importance for the later steps in image analysis, such as segmentation and object recognition. Many effective methods for color edge detection have been proposed over the past few years, and a comparative study of some of the representative edge detectors has been presented in this chapter. Two classes of operators, vector order statistic operators and difference vector operators, have been studied in detail because both are effective with multivariate data and are computationally efficient. Several variations have been introduced to these two classes of operators for the purpose of better noise suppression and higher efficiency. It has been shown that both classes offer a means of improving noise performance at the cost of increased complexity. The performance of all edge detectors has been evaluated both numerically and subjectively. The results presented demonstrate the superiority of the difference vector operator with adaptive pre-filtering over the other detectors: this operator scores highly in the numerical tests, and the edge maps it produces are perceived favorably by human eyes. It should be noted that different applications place different requirements on the edge detectors, and though some of the general characteristics of various edge detectors have been addressed, it is still best to select the edge detector that is optimal for the particular application.


Fig. 4.3. Test color image `ellipse'


Fig. 4.4. Test color image `flower'

Fig. 4.5. Test color image `Lenna'

References

1. Treisman, A., Gelade, G. (1980): A feature integration theory of attention, Cognitive Psychology, 12, 97-136.
2. Treisman, A. (1986): Features and objects in visual processing, Scientific American, 25, 114B-125.
3. Koschan, A. (1995): A comparative study on color edge detection, Proc. 2nd Asian Conf. on Computer Vision ACCV'95, III, 574-578.
4. Gonzales, R.C., Woods, R.E. (1992): Digital Image Processing. Addison-Wesley, Massachusetts.
5. Pratt, W.K. (1991): Digital Image Processing. Wiley, New York, N.Y.
6. Androutsos, P., Androutsos, D., Plataniotis, K.N., Venetsanopoulos, A.N. (1997): Color edge detectors: an overview. Proceedings, Canadian Conference on Electrical and Computer Engineering, 2, 827-831.
7. Clinque, L., Guerra, C., Levialdi, C. (1994): Reply: On the Paper by R.M. Haralick, CVGIP: Image Understanding, 60(2), 250-252.
8. Heath, M., Sarkar, S., Sanocki, T., Bowyer, K. (1998): Comparison of edge detectors, Computer Vision and Image Understanding, 69(1), 38-54.

Fig. 4.6. Edge map of `ellipse': Sobel detector
Fig. 4.7. Edge map of `ellipse': VR detector
Fig. 4.8. Edge map of `ellipse': DV detector
Fig. 4.9. Edge map of `ellipse': DV hv detector

9. Heath, M., Sarkar, S., Sanocki, T., Bowyer, K. (1997): A robust visual method for assessing the relative performance of edge-detection algorithms, IEEE Trans. Pattern Analysis and Machine Intelligence, 19(12), 1338-1359.
10. Sobel, I.E. (1970): Camera Models and Machine Perception, Ph.D. dissertation, Stanford University, California.
11. Marr, D., Hildreth, E. (1980): Theory of edge detection, Proceedings of the Royal Society of London, B-207, 187-217.
12. Zenzo, S.D. (1986): A note on the gradient of a multi-image, Computer Vision, Graphics and Image Processing, 36, 1-9.
13. Scharcanski, J., Venetsanopoulos, A.N. (1997): Edge detection of colour images using directional operators, IEEE Transactions on Circuits and Systems, xx.
14. Shiozaki, A. (1986): Edge extraction using entropy operator, Computer Vision, Graphics and Image Processing, 36, 116-126.
15. Cumani, A. (1991): Edge detection in multispectral images, CVGIP: Graphical Models and Image Processing, 53, 40-51.
16. Trahanias, P.E., Venetsanopoulos, A.N. (1993): Color edge detection using vector order statistics, IEEE Transactions on Image Processing, 2(2), 259-264.

Fig. 4.10. Edge map of `flower': Sobel detector
Fig. 4.11. Edge map of `flower': VR detector
Fig. 4.12. Edge map of `flower': DV detector
Fig. 4.13. Edge map of `flower': DVadap detector

17. Trahanias, P.E., Venetsanopoulos, A.N. (1996): Vector order statistics operators as color edge detectors, IEEE Transactions on Systems, Man and Cybernetics - Part B, 26(1), 135-143.
18. Scharcanski, J. (1993): Color Texture Representation and Classification, Ph.D. Thesis, University of Waterloo, Waterloo, Ontario, Canada.
19. Grossberg, S. (1988): Neural Networks and Natural Intelligence, MIT Press, Massachusetts.
20. Pratt, W.K. (1991): Digital Image Processing, John Wiley, New York, N.Y.
21. Healey, G. (1992): Segmenting images using normalized color, IEEE Trans. on Systems, Man and Cybernetics, 22(1), 64-73.
22. Poggio, T., Torre, V., Koch, C. (1985): Computational vision and regularization theory, Nature, 317.
23. Witkin, A. (1983): Scale-space filtering, Proceedings of the 8th Int. Joint Conf. on Artificial Intelligence, 2, 1019-1022.
24. Beaudet, P. (1987): Rotationally invariant image operators, Int. Joint Conf. on Pattern Recognition, 579-583.

Fig. 4.14. Edge map of `Lenna': Sobel detector
Fig. 4.15. Edge map of `Lenna': VR detector
Fig. 4.16. Edge map of `Lenna': DV detector
Fig. 4.17. Edge map of `Lenna': DVadap detector

25. Yang, Y. (1992): Color edge detection and segmentation using vector analysis, M.A.Sc. Thesis, University of Toronto, Toronto, Ontario, Canada.
26. Rosenfeld, A., Kak, A.C. (1982): Digital Picture Processing, Second Edition, Academic Press, N.Y., New York.
27. Nevatia, R. (1977): A color edge detector and its use in scene segmentation, IEEE Trans. on Systems, Man and Cybernetics, 7(11), 820-825.
28. Robinson, G.S. (1977): Color edge detection, Optical Engineering, 16(5), 479-484.
29. Machuca, R., Phillips, K. (1983): Applications of vector fields to image processing, IEEE Trans. on Pattern Analysis and Machine Intelligence, 5(3), 316-329.
30. Alshatti, W., Lambert, P. (1993): Using eigenvectors of a vector field for deriving a second directional derivative operator for color images, Proceedings of the 5th International Conference CAIP'93, 149-156.
31. David, H.A. (1980): Order Statistics, Wiley, N.Y., New York.

32. Barnett, V. (1976): The ordering of multivariate data, J. Royal Statist. Soc. A, 139(3), 318-343.
33. Feechs, R.J., Arce, G.R. (1987): Multidimensional morphologic edge detection, Proceedings SPIE Conf. Visual Comm. and Image Processing, 845, 285-292.
34. Lee, J.S.J., Haralick, R.M., Shapiro, L.G. (1987): Morphologic edge detection, IEEE Journal of Robotics and Automation, RA-3(2), 142-156.
35. Pitas, I., Venetsanopoulos, A.N. (1990): Nonlinear Digital Filters: Principles and Applications, Kluwer Academic Publishers.
36. Krzanowski, K. (1994): Multivariate Analysis I: Distributions, Ordination and Inference, Halsted Press, N.Y., New York.
37. Astola, J., Haavisto, P., Neuvo, Y. (1990): Vector median filters, Proceedings of the IEEE, 78(4), 678-689.
38. Plataniotis, K.N., Androutsos, D., Venetsanopoulos, A.N. (1997): Color image filters: The vector directional approach. Optical Engineering, 36(9), 2375-2383.
39. Zhu, Shu-Yu, Plataniotis, K.N., Venetsanopoulos, A.N. (1999): A comprehensive analysis of edge detection in color image processing. Optical Engineering, 38(4), 612-625.
40. Zhou, Y.T., Venkateshwar, T., Chellappa, R. (1989): Edge detection and linear feature extraction using a 2D random field model, IEEE Trans. on Pattern Analysis and Machine Intelligence, 11(1), 84-95.
41. Proakis, J.G. (1984): Digital Communications. McGraw Hill, New York, N.Y.
42. Androutsos, P., Androutsos, D., Plataniotis, K.N., Venetsanopoulos, A.N. (1997): Subjective analysis of edge detectors in color image processing. Image Analysis and Processing, Lecture Notes in Computer Science, 1310, 119-126, Springer, Berlin, Germany.
43. Androutsos, P., Androutsos, D., Plataniotis, K.N., Venetsanopoulos, A.N. (1998): Color edge detectors: a subjective analysis. Proceedings Nonlinear Image Processing IX, 3304, 260-267.

5. Color Image Enhancement and Restoration

5.1 Introduction

Enhancement techniques can be used to process an image so that the final result is more suitable than the original image for a specific application. Most image enhancement techniques are problem oriented. Image enhancement techniques fall into two broad categories: spatial domain techniques and frequency domain methodologies. The spatial domain refers to the image itself, and spatial domain approaches are based on the direct manipulation of the pixels in the image. On the other hand, frequency domain techniques are based on modifying the Fourier transform of the image. Only spatial domain techniques are discussed in this chapter.

The histogram of a monochrome image presents the relative frequency of occurrence of the various levels in the image. Histogram equalization has been proposed as an efficient technique for the enhancement of monochrome images. This technique modifies an image so that the histogram of the resulting image is uniform. Variations of the technique, known as histogram modification and histogram specification, which result in a histogram having a desired shape, have also been proposed. The extension of histogram equalization to color images is not trivial, due to the multidimensional nature of the histogram of a color image. Pixels in a monochrome image are defined only by gray-level values, so monochrome histogram equalization is a scalar process. On the contrary, pixels in color images are defined by three primary values, which implies that color equalization is a vector process. If histogram equalization is applied to each of the primary colors independently, changes in the relative percentages of the primaries for each pixel may occur. This can lead to color artifacts. For this reason, various methods have been proposed for color histogram equalization which spread the histogram either along the principal component axes of the original histogram, or spread repeatedly the three two-dimensional histograms. Other enhancement methods have been proposed recently which operate mainly on the luminance component of the original image.

This chapter focuses on the direct three-dimensional histogram equalization approaches. The first method presented is actually a histogram specification method. The specified histogram is a uniform 3-D histogram and, consequently, histogram equalization is achieved. The theoretical background of the method is first presented, and issues concerning the computational implementation are then discussed in detail.


the method is first presented, and then issues concerning the computational implementation are discussed in detail. Finally, experimental results from the application of this method are presented. A method of enhancing color images by applying histogram equalization to the hue and saturation components in the HSI color space is also presented.

5.2 Histogram Equalization

In histogram equalization the objective is to obtain a uniform histogram for the output image. The theoretical foundation underlying histogram equalization can be found in probability theory. If an absolutely continuous random variable (monochrome value) X, a < X < b, with cumulative distribution function F_X(x) = \Pr(X \le x) is considered, then the random variable Y = F_X(X) is uniformly distributed over (0, 1). In the discrete case the assumption of continuity of the variable X is not satisfied, and therefore Y is only approximately uniformly distributed. Nevertheless, histogram equalization effectively spreads the monochrome values, resulting in a powerful image enhancement technique.

For the three-dimensional RGB color space one can proceed in an analogous manner. Consider three continuous variables R, G and B with joint probability density function f_{R,G,B}(r, g, b) and joint probability distribution function F_{R,G,B}(r, g, b) = \Pr(R \le r, G \le g, B \le b). As in the scalar case, three new variables R_s, G_s and B_s are defined as:

R_s = F_R(R), \quad G_s = F_G(G), \quad B_s = F_B(B)    (5.1)

The joint probability distribution function of R_s, G_s and B_s is given by:

F_{R_s,G_s,B_s}(r_s, g_s, b_s) = \Pr(R_s \le r_s, G_s \le g_s, B_s \le b_s)
  = \Pr(F_R(R) \le r_s, F_G(G) \le g_s, F_B(B) \le b_s)
  = \Pr(R \le F_R^{-1}(r_s), G \le F_G^{-1}(g_s), B \le F_B^{-1}(b_s))
  = F_{R,G,B}(F_R^{-1}(r_s), F_G^{-1}(g_s), F_B^{-1}(b_s))    (5.2)

If independence of the R, G and B components is assumed, the last equation can be further decomposed as a product of the probability distribution functions of the three primaries:

F_{R_s,G_s,B_s}(r_s, g_s, b_s) = F_R(F_R^{-1}(r_s))\, F_G(F_G^{-1}(g_s))\, F_B(F_B^{-1}(b_s)) = r_s\, g_s\, b_s    (5.3)


From (5.3), the corresponding joint density is:

f_{R_s,G_s,B_s}(r_s, g_s, b_s) = \frac{\partial^3 F_{R_s,G_s,B_s}(r_s, g_s, b_s)}{\partial r_s\, \partial g_s\, \partial b_s} = 1    (5.4)

From this result it is concluded that a uniform distribution of the histogram in the (R_s, G_s, B_s) space is guaranteed only in the case of independent R_s, G_s and B_s components. However, it is known that the three primaries of the RGB color space are correlated and thus this assumption is not valid. Many methods have been proposed to overcome this difficulty. Most of them spread the histogram along the principal component axes of the original histogram, or repeatedly spread the three two-dimensional marginal histograms. However, since the aim is a uniform 3-D histogram, the problem can be viewed as one of histogram specification: the 3-D uniform histogram is specified as the output histogram and, therefore, histogram equalization is achieved.

In the scalar case, the method of histogram specification works as follows. Assume that X and Y are the input and output variables that take the values x_{i_x} and y_{i_y}, i_x, i_y = 0, 1, \ldots, L-1, with L the number of discrete gray levels, with probabilities p_X(x_{i_x}) and p_Y(y_{i_y}), respectively. Then the following auxiliary parameters can be defined:

C_{I_x} = \sum_{i_x=0}^{I_x} p_X(x_{i_x}), \qquad I_x = 0, \ldots, L-1    (5.5)

\hat{C}_{I_y} = \sum_{i_y=0}^{I_y} p_Y(y_{i_y}), \qquad I_y = 0, \ldots, L-1    (5.6)
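For illustration, the scalar specification step built on (5.5) and (5.6) can be sketched in a few lines of NumPy. This is only an illustrative sketch, assuming 8-bit data (L = 256); the function and variable names are chosen here and are not part of the original text. Selecting the smallest output level whose cumulative sum exceeds the input cumulative sum is the scalar analogue of the rule (5.9) used below for the 3-D case.

```python
import numpy as np

def specify_histogram_1d(x, p_y, L=256):
    """Map a gray-level image x so that its histogram approximates p_y.

    x   : 2-D uint8 array (input image).
    p_y : length-L array with the desired output probabilities.
    """
    # Empirical input probabilities p_X and cumulative sums C_Ix, Eq. (5.5).
    p_x = np.bincount(x.ravel(), minlength=L) / x.size
    C = np.cumsum(p_x)
    # Desired cumulative sums C_hat_Iy, Eq. (5.6).
    C_hat = np.cumsum(p_y)
    # For each input level, pick the smallest output level with C_hat >= C.
    lut = np.searchsorted(C_hat, C, side='left').clip(0, L - 1)
    return lut[x].astype(np.uint8)

# Histogram equalization is the special case of a uniform target histogram:
# eq = specify_histogram_1d(img, np.full(256, 1.0 / 256))
```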

For the 3-D case, the following method is proposed for uniform histogram specification. Let X and Y be the input and output vector variables which assume as values the triplets (x_{r_x}, x_{g_x}, x_{b_x}) and (y_{r_y}, y_{g_y}, y_{b_y}), with r_x, g_x, b_x, r_y, g_y, b_y = 0, 1, \ldots, L-1, and probabilities p_X(x_{r_x}, x_{g_x}, x_{b_x}) and p_Y(y_{r_y}, y_{g_y}, y_{b_y}). The probabilities p_X are computed from the histogram of the original color image. The probabilities p_Y are all set to 1/L^3, since there are L^3 histogram entries and a uniform distribution is desired. The 3-D equivalents of the variables defined in (5.5) and (5.6), C_{R_x G_x B_x} and \hat{C}_{R_y G_y B_y}, are computed from p_X and p_Y as follows:

C_{R_x G_x B_x} = \sum_{r_x=0}^{R_x} \sum_{g_x=0}^{G_x} \sum_{b_x=0}^{B_x} p_X(x_{r_x}, x_{g_x}, x_{b_x})    (5.7)

\hat{C}_{R_y G_y B_y} = \sum_{r_y=0}^{R_y} \sum_{g_y=0}^{G_y} \sum_{b_y=0}^{B_y} p_Y(y_{r_y}, y_{g_y}, y_{b_y}) = \sum_{r_y=0}^{R_y} \sum_{g_y=0}^{G_y} \sum_{b_y=0}^{B_y} \frac{1}{L^3} = \frac{(R_y+1)(G_y+1)(B_y+1)}{L^3}    (5.8)

It can easily be seen that \hat{C}_{R_y G_y B_y} can be computed as a product instead of a triple summation. Following that, the smallest (R_y, G_y, B_y) for which the inequality

\hat{C}_{R_y G_y B_y} - C_{R_x G_x B_x} \ge 0    (5.9)

holds can be found. Summarizing, the following three steps constitute the above described method for 3-D histogram specification:
1. Compute the original histogram.
2. Compute C_{R_x G_x B_x} and \hat{C}_{R_y G_y B_y} using (5.7) and (5.8), respectively.
3. For each value (R_x, G_x, B_x) find the smallest (R_y, G_y, B_y) such that (5.9) is satisfied. The set of values (R_y, G_y, B_y) is the output produced.

Computationally, step 1 is implemented in just one pass through the image data. Step 2 can be implemented recursively, drastically reducing the execution time and memory requirements. Dropping for simplicity the case where any of R_x, G_x and B_x is zero, C_{R_x G_x B_x} can be computed as:

C_{R_x G_x B_x} = C_{R_x-1, G_x-1, B_x-1} + C_{R_x-1, G_x, B_x} + C_{R_x, G_x-1, B_x} + C_{R_x, G_x, B_x-1} - C_{R_x-1, G_x-1, B_x} - C_{R_x-1, G_x, B_x-1} - C_{R_x, G_x-1, B_x-1} + p_X(R_x, G_x, B_x)    (5.10)

Step 3 presents an ambiguity, since many solutions (R_y, G_y, B_y) satisfy (5.9). This ambiguity is remedied as follows. The computed value of C_{R_x G_x B_x} at (R_x, G_x, B_x) is initially compared to the product P = (R_x+1)(G_x+1)(B_x+1)/L^3, i.e. the value of \hat{C}_{R_y G_y B_y} at (R_x, G_x, B_x), since for a uniform histogram the value of this product should also be the value of C_{R_x G_x B_x}. In case of equality the input value is not changed. If C_{R_x G_x B_x} is greater (less) than P, then the indices R_x, G_x and B_x are repeatedly increased (decreased), one at a time, until (5.9) is satisfied. The final values constitute the output produced. The merit of this procedure is twofold: (a) histogram stretching is achieved simultaneously in all three directions, and (b) the computational requirements are reduced, since only a few values are checked. Because this method processes all three dimensions at once and maintains the basic ratio between the three primaries, it does not produce the color artifacts associated with independent processing. The overall computational complexity of the algorithm is dominated by step 2, which computes the cumulative histogram C_{R_x G_x B_x} for a total of L^3 entries, resulting in O(L^3) complexity.
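As a concrete illustration of the three steps above, the following NumPy sketch builds the 3-D cumulative histogram and performs the index search of step 3. It is a rough sketch under assumptions not stated in the text: the input is requantized to a small number of levels L per channel to keep the L^3 tables manageable, only the upward search of step 3 is implemented (if C < P the starting triple already satisfies (5.9) and is kept), and all names are illustrative.

```python
import numpy as np

def equalize_rgb_3d(img, L=32):
    """3-D uniform histogram specification following steps 1-3 above.

    img : (H, W, 3) uint8 RGB image.  Returns an image requantized to L
    levels per channel whose joint histogram is approximately uniform.
    """
    q = (img.astype(np.int64) * L) // 256                 # L levels per channel
    # Step 1: original 3-D histogram p_X.
    flat = (q[..., 0] * L + q[..., 1]) * L + q[..., 2]
    p_x = np.bincount(flat.ravel(), minlength=L ** 3).astype(float)
    p_x = p_x.reshape(L, L, L) / flat.size
    # Step 2: cumulative histogram C_{RxGxBx}; running sums along the three
    # axes are equivalent to the recursion (5.10).
    C = p_x.cumsum(0).cumsum(1).cumsum(2)
    # Step 3 (simplified): grow the three indices in turn, starting from
    # (Rx, Gx, Bx), until C_hat = (Ry+1)(Gy+1)(By+1)/L^3 satisfies (5.9).
    lut = np.zeros((L, L, L, 3), dtype=np.int64)
    for rx, gx, bx in zip(*np.nonzero(p_x)):
        ry, gy, by = rx, gx, bx
        c, turn = C[rx, gx, bx], 0
        while ((ry + 1) * (gy + 1) * (by + 1) / L ** 3 < c
               and (ry, gy, by) != (L - 1, L - 1, L - 1)):
            if turn == 0:
                ry = min(ry + 1, L - 1)
            elif turn == 1:
                gy = min(gy + 1, L - 1)
            else:
                by = min(by + 1, L - 1)
            turn = (turn + 1) % 3
        lut[rx, gx, bx] = (ry, gy, by)
    out = lut[q[..., 0], q[..., 1], q[..., 2]]
    return (out * (256 // L)).astype(np.uint8)             # back to 8-bit range
```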


Histogram equalization and modification can be applied directly to RGB images. However, such an application causes significant color hue shifts that are usually unacceptable to the human eye. Thus, a color image histogram modification that does not change the color hue should be used. Such a modification can be performed in coordinate systems in which the luminance, hue and saturation of a color can be described. A practical approach to developing color image enhancement algorithms is therefore to transform the RGB values into another color coordinate representation that describes luminance, hue and saturation. In such a system, interaction between colors can cause changes in all these parameters.

For monochrome images, histogram equalization is frequently used to boost the high-frequency components of an image and thus to enhance images with poor contrast. Experimentation with color images has revealed that the high-frequency content of the saturation component can be quite different from that of the luminance component. Currently, the most common method of equalizing color images is to process only the luminance component. Since most high-frequency components of a color image are concentrated in the luminance component, histogram equalization is applied only to the luminance component to enhance the contrast of color images, in color spaces such as YIQ, YC_BC_R, or the hue, luminance and saturation space discussed in [4]. However, there is still correlation between the luminance value and the chrominance values in these color spaces. Therefore, histogram equalization of the luminance component also changes the chromatic information, resulting in color artifacts. To alleviate this problem, an algorithm for saturation equalization was developed recently [5]. In this approach, histogram equalization is also applied to the saturation component obtained from the two chromatic channels. In all the different approaches, after processing, the new coordinates of the enhanced image are inverse transformed to RGB components for display.

One such system on which the modification can be performed is the HSI color space. Modification of the I or S components does not change the hue. In other words, a color characterized as yellow remains yellow when the algorithm changes its intensity and/or saturation, although a different yellow variant is produced. This observation suggests the application of histogram equalization or modification only to the I or S components. The cone shape of the HSI color space implies non-uniform densities for I and S if an overall uniform density is desired over the entire RGB cube. If this fact is not taken into account when the image intensity is equalized, many color points are concentrated near the points I = 0 and I = 1. The limited color space available near these points causes distortion of the color image. Using geometrical arguments it is possible to define the probability density functions that fill the HSI color space uniformly as follows:

f_I(I) = 12 I^2,          0 \le I \le 0.5
       = 12 (1 - I)^2,    0.5 \le I \le 1    (5.11)

f_S(S) = 6S - 6S^2,       0 \le S \le 1    (5.12)

f_{IS}(I, S) = 6S,  S \le 2I,        I \in [0, 0.5]
             = 0,   S > 2I,          I \in [0, 0.5]
             = 6S,  S \le 2(1 - I),  I \in [0.5, 1]
             = 0,   S > 2(1 - I),    I \in [0.5, 1]    (5.13)

It can easily be shown that the marginal distributions of the joint pdf f_{IS}(I, S) are f_I(I) and f_S(S), respectively. If X = (x_1, x_2, \ldots, x_m) needs to be transformed to Y = (y_1, y_2, \ldots, y_m), where Y must have a prescribed joint pdf f_Y(Y), one first derives the transformations Z = T_1(X) and S = T_2(Y) that equalize the random vectors X and Y, and then combines the two transforms into one by means of:

Y = T_2^{-1}[T_1(X)]    (5.14)

Transformations of the form (5.14) can be used for the modification of I and S, either jointly or separately. Intensity modifications of the form (5.14), with the target densities (5.11)-(5.13), have been shown by simulation to provide better results than a straightforward quantization of I. Modifications of the saturation S must be done with care. Many natural scenes possess very low saturation values, and sometimes the preferred color for image reproduction is a more saturated version of the original color. Images transformed using (5.12) or (5.13) tend to have highly saturated colors. In certain cases the result of this modification may be too saturated and the colors may not appear natural; in such cases, a conservative saturation equalization is preferable. The mathematical formulation of such an equalization is difficult, because the subjective perception of acceptable saturation may differ from one image to another. For example, highly saturated colors are appropriate in pseudo-coloring because they enhance the visual representation of an image. To illustrate the preceding discussion, Figs. 5.1 and 5.2 show an original image and the resulting image after the application of the histogram modification technique discussed in this section.
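As a rough illustration of (5.14) applied to the intensity component, the sketch below maps I so that its distribution approximates f_I of (5.11). The target CDF used here, F_I(I) = 4I^3 for I <= 0.5 and 1 - 4(1 - I)^3 otherwise, follows by integrating (5.11) and is an assumption of this sketch, as are the function names; it is not the book's implementation.

```python
import numpy as np

def remap_intensity(I, n_bins=256):
    """Remap intensities I in [0, 1] towards the target density f_I of (5.11).

    Implements Y = T2^{-1}[T1(X)] of (5.14): T1 is the empirical CDF of I,
    T2 the target CDF F_I(I) = 4I^3 (I <= 0.5), 1 - 4(1-I)^3 (I > 0.5).
    """
    # T1: empirical CDF of the observed intensities.
    hist, edges = np.histogram(I, bins=n_bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist) / I.size
    u = np.interp(I, edges[1:], cdf)              # u = T1(I), roughly uniform
    # T2^{-1}: invert the target CDF analytically.
    low = u <= 0.5
    out = np.where(low, (u / 4.0) ** (1.0 / 3.0),
                        1.0 - ((1.0 - u) / 4.0) ** (1.0 / 3.0))
    return out

# Example: I_eq = remap_intensity(np.random.rand(512, 512))
```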

5.3 Color Image Restoration

In the last decade, the restoration of multichannel data has become increasingly important in a wide variety of research areas, mainly due to the development of powerful digital electronics and computers and the widespread applicability of color image processing techniques. These areas include the processing of color images, high definition TV and video, remote sensing, environmental studies, astronomy, industrial inspection and biomedical applications [8]-[12]. In video applications the need for

Fig. 5.1. The original color image `mountain'
Fig. 5.2. The histogram equalized color output

high-quality images calls for increased bandwidth. Due to the huge amount of information encoded in color, multichannel processing and compression become crucial for the effective transmission and storage of color images. In the field of industrial inspection, multichannel processing is used to obtain quality products and to isolate damaged parts. In the field of robot guidance, several video images are processed to acquire information about the environment and the position of the robot within this environment, in order to guide the robot autonomously.

A modern field of image restoration applications concerns the decomposition of images into several subbands and/or resolution levels. Subband signal processing has attracted significant attention because of its potential to separate frequency bands and operate on them separately. Similarly, multiresolution signal processing employs wavelet transforms to decompose and represent an image at different levels of detail [13], [14]. The multiband and multiresolution signals can be considered as subclasses of multichannel images.

In these and other related applications the data set is collected from a variety of sensors that capture different properties of the signal. In high definition TV and video applications the data set is composed of high-resolution color images obtained at different time instances. In multisensor robot guidance, various sensors receive information from different spatial locations. In multispectral satellite remote sensing, signal information is distributed in different frequency bands that cover the visible and/or the invisible wavelength spectrum. Moreover, in satellite imaging the images are characterized by different levels of resolution. In the areas of environmental studies and astronomy the data is collected from different sources, at different time instances and in various frequency bands. In the area of biomedical applications many modalities exist, with possibly different bands in each modality. In multiresolution image processing that involves subband decomposition, wavelet and other orthogonal transforms, the image is characterized by several


levels of resolution. In general, the processing of multiple image frames will be referred to as multichannel image processing. In this chapter, the problem of image restoration, which refers in general to the inversion of processes that degrade the acquired image, will be considered.

From the multiple encoding of information, it becomes evident that color images convey much more information than single-channel images. In most applications, the composite images are highly correlated. However, the corresponding channels are affected differently by physical phenomena. Information lost in one channel, through atmospheric absorption, for instance, may be present in the other channels. Restoring color or, in general, multichannel images becomes much more involved than the processing of single-channel images, not only because of the increased dimensionality, but also due to the need for identification and exchange of information among all the different channels. As an example, consider the case of multiframe coherent imagery, such as sequences of synthetic aperture radar (SAR) images. Optimal single-channel algorithms usually fail to restore the original image, especially in cases of low signal-to-noise ratio. Alternatively, temporal processing, or even a simple temporal averaging of multiple registered frames (with motion compensation), is efficient in suppressing the dominant speckle noise and in recovering the signal under consideration [15], [16].

When formulating algorithmic procedures for image restoration, careful consideration must be given to two factors: (i) the image formation process, and (ii) the stochastic models of the image and the noise process. It was argued in Chap. 2 that, due to limitations in sensor technology, the observed image is often a degraded version of the original image. If the luminance of the observed scene is large, the sensor is saturated, operating in its nonlinear region. The captured image is also contaminated by noise that reflects either thermal oscillations in electronic devices or intensity quantization due to absorbency and other material limitations in regular film cameras. The noise corruption is often signal dependent and multiplicative in nature. Furthermore, the fast movement of an object or the existence of an out-of-focus object introduces blurring and noise in the observed image. Overall, degradation processes such as non-linear transformation, bandwidth reduction and noise contamination are encountered in the data. The degradation process usually varies from pixel to pixel over the area of the observed image, and its effect is space-variant. This means that the restoration process that inverts the degradation should also be space-variant. In addition, a realistic image model capturing the statistical properties of the observed scene is highly non-stationary. Image characteristics such as sharp intensity and color transitions, edges and texture differences imply varying statistical properties within the image area. Consequently, the stochastic model describing the image process should be non-stationary. These aspects are considered along with algorithms developed for color image restoration. An overview of these


algorithms is provided along with conclusions and open problems for further research.

5.4 Restoration Algorithms

Image restoration is, for all practical purposes, an ill-posed problem. Through linear degradations that cause bandwidth reduction, a significant amount of visual information is lost in the acquisition process. Thus, linear degradation kernels possess small or even zero eigenvalues, whose inversion results in excessive noise amplification and renders the image restoration process unstable. Most image restoration approaches can be interpreted as attempts to regularize this ill-posed problem. Such approaches are based on specific assumptions and reflect their limitations as artifacts in the restored estimate. Due to the coupling of information among the channels, such artifacts are much more significant in multichannel images. Computational errors due to simplifying approximations tend to accumulate in multichannel images, since errors in one channel propagate and affect the processing of all other channels. Thus, the requirements for accurate modeling are much stricter in the restoration of color images than in the processing of monochrome images.

Initiating research in multichannel images, several aspects of color image restoration were addressed in [17]. The Karhunen-Loeve (KL) transformation was introduced as a tool for de-correlating the color image set, so that each channel can be processed independently of the others [18], [19]. The decorrelation approach and the subsequent independent restoration of channels is valid only under restrictive assumptions in the imaging model [20]. Moreover, the KL transform does not help in de-correlating the cross-channel degradation effects or a channel-correlated noise process.

A more natural approach to color image processing calls for multidimensional image processing techniques. Initial attempts in this direction provide extremely promising results. The minimum mean-square-error (MMSE) restoration scheme and the Wiener filtering approach were extended to the case of multichannel images [21]. The Wiener filter approach has also been used in the multiframe restoration of image sequences [22]. Even though the limitations of the Wiener filter are well established, the results obtained from its multichannel extension are very encouraging. Relaxing the stationarity assumption within each channel, which is essential for the efficient implementation of the Wiener filter, a three-dimensional (3-D) Kalman filtering approach to multichannel image restoration was developed in [18]. A related autoregressive approach is proposed in [22] using an extended 2-D Kalman filter, whose coefficients capture the relationship among channels. The improvement achieved through this technique over the stationary MMSE implementation, however, is insignificant [18]. This performance is directly attributed to the limitations of the quadratic error function associated with the linear MMSE estimation approach. Typical regularized approaches, such


as the constrained least-squares (CLS) and the Tikhonov-Miller formulations, attempt to compensate for the ill-posedness of the pseudo-inverse solution by utilizing smoothness information in the restoration process [23], [24]. In view of this inherent characteristic, the regularization approaches can be related to the stochastic maximum a posteriori probability (MAP) estimation of the original image. The inversion of the degradation processes is achieved by estimating the original image from the data, assuming specific forms of the prior and posterior distribution functions. Motivated by the success of regularized approaches in monochrome images, the CLS estimation scheme was applied to the restoration of multichannel images in [25]. Even though the CLS approach is a powerful technique that enables the incorporation of a priori information by means of constraints, it defines and operates on quadratic metrics. Suffering from the nature of its penalizing functions, it does not encourage the reconstruction of sharp edges and it often produces noise and ringing artifacts.

The maximum likelihood approach to the multichannel inversion problem has also been considered. In particular, the iterative expectation-maximization (EM) algorithm has been extended to the multichannel domain and has been used for blur identification and restoration [26]. Despite its increased computational complexity, this iterative technique has comparative advantages over other techniques in cases where the blur operator and the signal and noise power spectra are unknown. When the degradation operator is known, however, it reduces to an iterative Wiener algorithm.

Several properties of the signal and the noise processes can be expressed through mathematical constraints defined in a set-theoretic environment. Motivated by the success of projection algorithms in single-channel restoration, multichannel constraints were introduced in [27], leading to the extension of the projection-onto-convex-sets (POCS) approach to the color case. These multichannel constraints capture the available information regarding the original image in the general structure of prototype estimates. The POCS approach is closely related to the concept of regularization of ill-posed problems through set-theoretic considerations. In essence, the set-theoretic approach constrains the estimation space by means of convex ellipsoids, defined through quadratic constraints [25]. A general framework for the restoration of multichannel images in the frequency domain is presented in [28]. The regularization approaches that can be modeled in this form reflect deterministic constraints on the estimate, which aim to restrain the amplification of noise. However, such a universal deterministic constraint suppresses the high-frequency components of the estimate with no regard for the stochastic structure of the image under consideration. Thus, such approaches do not encourage the reconstruction of sharp edges and often produce noise and signal artifacts.


A structured regularized approach, referred to as the constrained mean-square-error (CMSE) estimation scheme, is developed in [15]. The smoothing functional in the CMSE restoration approach reflects the structure of the multichannel MMSE estimate, which is essentially utilized as a prototype constraint image. In contrast to the POCS algorithm, the MMSE estimate is not used to define hard constraints on the estimate space, but is rather employed as a means of influencing the structure of the solution. Thus, the CMSE approach always derives a meaningful estimate, which is conceptually located between the MMSE estimate and the pseudo-inverse solution. This approach enables the suppression of streak and ringing artifacts, but does not account for the representation of sharp edges. In order to preserve discontinuities (edges), the multichannel data are modeled as a Markov random field using a Gibbs prior [29]. This approach incorporates non-stationary intra-channel and inter-channel correlations, allowing the preservation and reconstruction of sharp edges.

The previous approaches consider variations of quadratic objective functions that reflect Gaussian stochastic models for both the signal and the noise statistics. Quadratic functions are attractive because they enable the analytic derivation of the corresponding estimators and provide cost-efficient implementations. The Gaussian model, however, does not cover most realistic noise sources, which are characterized by Poisson, Laplacian, binomial, or even signal-dependent statistics. Furthermore, the Gaussian distribution cannot characterize the vast majority of images, whose histograms are often not even unimodal. Given the channel-interactive nature of color processing, it is expected that the limitations of linear algorithms are even more restrictive in the case of multichannel images. Most multichannel applications, including color images as well as satellite remote sensing, SAR and infrared time-varying imagery, involve noise models whose distributions possess long tails, such as the Cauchy or Laplacian distributions. Moreover, transmission errors in digital channels, as well as abrupt natural phenomena, manifest themselves through the creation of impulses in the image, which can be interpreted as outliers in the actual or assumed noise distribution. Restoration algorithms must reflect the long-tailed characteristics of such noise processes. In addition, they must incorporate good models for the signal statistics that permit the representation of sharp edges. There is a growing demand for robust multichannel algorithms that can effectively handle the large variation of signal and noise statistics encountered in practice [30], [31].

The concept of robust estimation has been utilized extensively in the case of monochrome images. Robust approaches have been developed in stochastic environments where the noise statistics are approximated by models that allow uncertainty in the form of noise outliers. Informally stated, an estimator is called robust if it is almost optimal at the nominal noise distribution and is only slightly affected by the presence of outliers in the noise distribution, which reflect the uncertainty in the noise model [31], [32]. For example,


the class of M-estimators is obtained from the generalized maximum likelihood approach. In this approach, the objective function deviates from the quadratic structure at large errors, so as to restrain their contribution to the overall criterion. The optimal properties of the M-estimators are derived on the basis of the minimax formulation, which minimizes the worst (maximum) estimation error within a specific class of noise processes [31], [33]. The framework for the extension of robust approaches to the multichannel case, by means of a generalized MAP formulation, is developed in [34]. Nonlinear estimators derived through robust probabilistic measures have been employed in the restoration of images corrupted by mixed-noise processes, with impressive success [30]-[34]. Mixed-noise processes involve both medium-tailed noise, such as Gaussian noise, and processes with long-tailed statistics, such as exponential and impulsive noise.

Non-quadratic objective functions can also be utilized in the representation of the prior signal statistics. Such functions have been motivated in maximum a posteriori (MAP) formulations under Markov random fields with non-quadratic Gibbs distributions [29]. These formulations have proved useful in applications such as emission tomography [35] and the reconstruction of three-dimensional (3-D) surfaces from range information [36]. A novel view of the modeling of the detailed structure, in which the existence of sharp edges manifests uncertainty regarding the distribution of the signal, was presented in [34]. In essence, sharp edges can be considered as outliers applied to an assumed medium-tailed (possibly Gaussian) signal distribution. Thus, to account for uncertainty in the overall restoration process, robust functionals are motivated in the representation of not only the noise but also the signal statistics. In the next section the framework for the development of multichannel (color) restoration algorithms is provided.

5.5 Algorithm Formulation

5.5.1 Definitions

Consider the formation of p channels. For the k-th channel, let f_k and g_k denote the original and the degraded image, n_k denote the noise vector, and H_{kk} represent the channel point-spread function (PSF), all in vector ordering. The image and the noise vectors are considered to be of dimensionality (N x 1) with N = M^2, where (M x M) is the dimensionality of the 2-D problem, and the PSF operator H_{kk} is an (N x N) matrix. Through the lexicographic notation, the original image vector is written as:

f = [f_1^T \; \ldots \; f_p^T]^T    (5.15)

where T denotes the transpose operation. The vector notation results from the multichannel representation by arranging rows above columns within each


channel, and then arranging the channels on top of each other. In general, the degradation matrix for the k-th channel couples all the components of the original image. In the case of a color image (p = 3), the overall degradation matrix is written in block form as:

H = \begin{bmatrix} H_{11} & H_{12} & H_{13} \\ H_{21} & H_{22} & H_{23} \\ H_{31} & H_{32} & H_{33} \end{bmatrix}    (5.16)

with dimensionality equal to (3N x 3N). This formulation is general enough to cover a variety of multichannel image processing applications in which linear degradation operators are involved. For instance, the block elements H_{ij} can encode channel limitations in multispectral and color imagery. Moreover, they can encode geometric affine and scale transformations in multisensor robotic applications, in the registration or alignment of image data sets, and in video and time-varying sequences. In these cases, the component block images f_i represent sequential frames and/or images obtained from a number of displaced cameras. The block elements H_{ij} may implement L-D projection operators, as in biomedical applications, or non-invertible data reduction transforms, as in lossy image compression. These elements can also incorporate decimation effects in the modeling of subband, wavelet and other orthogonal decomposition and/or multiresolution approaches. This representation facilitates the analysis and the optimal design of subband and multiresolution filters and is particularly useful in image interpolation and image reconstruction from limited subband information [37], [38]. The diagonal block elements H_{ii} represent intra-channel, band or frame effects and degradations. The off-diagonal block elements H_{ij}, i ≠ j, enable the consideration of channel interference in the image formation process, representing channel leakage in multispectral imagery or registration errors in time-varying sequences, for instance [15], [25]. In addition, these elements can be utilized for the simultaneous registration and restoration of time-varying images.

Let H_k represent the k-th block row of the PSF matrix:

H_k = [H_{k1} \; \ldots \; H_{kp}]    (5.17)

Following these definitions, the formation of the k-th data channel is written as:

g_k = H_k f + n_k    (5.18)

Equivalently, the overall image formation model of p channels is given by:

g = H f + n    (5.19)

where the data vector g and the noise vector n are defined similarly to (5.15). This model involves linear degradation or bandwidth reduction and additive noise.
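For concreteness, the block model (5.16)-(5.19) can be simulated on a small scale. The following sketch builds a 3-channel degradation matrix with intra-channel blurs on the diagonal and weak cross-channel leakage off the diagonal, then forms g = Hf + n; the image size, the blur kernel and the leakage weight are arbitrary illustrative choices, not values from the text.

```python
import numpy as np

def blur_matrix(m, kernel=(0.25, 0.5, 0.25)):
    """(N x N) matrix applying a 1-D blur, row-wise only, to a
    lexicographically ordered m x m image (N = m*m); kept simple on purpose."""
    n = m * m
    H = np.zeros((n, n))
    for i in range(n):
        for k, w in zip((-1, 0, 1), kernel):
            j = i + k
            if 0 <= j < n and j // m == i // m:   # stay inside the same row
                H[i, j] += w
    return H

m, p = 8, 3                                       # image size and channels
N = m * m
f = np.random.rand(p * N)                         # stacked original image (5.15)
blocks = [[None] * p for _ in range(p)]
for i in range(p):
    for j in range(p):
        # Diagonal blocks: intra-channel blur; off-diagonal: channel leakage.
        blocks[i][j] = blur_matrix(m) if i == j else 0.05 * np.eye(N)
H = np.block(blocks)                              # overall operator (5.16)
n = 0.01 * np.random.randn(p * N)                 # additive noise
g = H @ f + n                                     # data formation model (5.19)
```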


Even though it does not exhaust the degradation factors that may affect a multichannel image, it covers many useful data formation processes and has been used extensively due to its simplicity and its potential for deriving effective inversion operators.

At this point some fundamental differences between the monochrome and the color formulations should be discussed. In the case of monochrome images, the assumptions of wide-sense stationarity and space invariance are often used. These assumptions lead to block-circulant matrix forms, whose eigenvalue decomposition is easily performed in the 2-D discrete Fourier transform (DFT) domain. Consequently, the invertibility of the combined matrix operators involved in the computation of the restored image estimate can also be verified in the 2-D DFT domain. In particular, the regularizing operator can easily be selected in relation to the PSF matrix H. In the usual case of a low-pass operator H, it suffices to select a Laplacian high-pass regularizing filter that stabilizes all small eigenvalues of H, in order to derive a well-posed inversion process and guarantee the uniqueness (and stability) of the corresponding solution.

The stationarity assumption is unrealistic in the overall characterization of a multichannel image, because each channel captures different features of the image. Moreover, overall space invariance in the imaging model is unjustifiable [39]-[43]. Each pair of two specific frames embodies the relationship between specific characteristics of the image: wavelength relationships in multispectral imagery, or temporal association in time-varying sequences. A reasonable consequence of this coupling is the assumption of stationary interference only within pairs of specific frames. Thus, for the multichannel model, stationarity and space invariance are assumed only within each pair of channels. This assumption results in the henceforth-called partially block-circulant structure, whose composite block matrices are in block-circulant form and can be diagonalized through the 2-D DFT operator [18]. Nevertheless, these blocks need not be related and can be arranged in any structure within the partially block-circulant matrix. This assumption has been employed in the implementation of multichannel algorithms with particular success [8], [17], [18], since it provides a considerable reduction of the computational complexity and a reasonable characterization of the multichannel interaction.

Typical operations with partially block-circulant matrices are efficiently implemented in the so-called multichannel DFT domain. The transformation of a multichannel vector x into this domain is performed through a multiplication with the block matrix \mathcal{W} [18]:

\mathcal{W} x = \begin{bmatrix} W & 0 & \cdots & 0 \\ 0 & W & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & W \end{bmatrix} x    (5.20)

where W denotes the 2-D DFT matrix.


The transformation of a partially block-circulant matrix A into the multichannel DFT domain is also expressed as a matrix multiplication:

\tilde{A} = \mathcal{W} A \mathcal{W}^T    (5.21)

The resulting matrix \tilde{A} is composed of diagonal blocks. This particular matrix type is referred to as a partially block-diagonal matrix. Operations with such matrices preserve the partially block-diagonal structure. The multiplication of a multichannel image vector by a partially block-diagonal matrix can be decomposed into single-channel multiplications in the 2-D DFT domain. Moreover, the inversion of partially block-diagonal matrices can be performed efficiently in two ways: the first is a recursive technique [18], while the second is based on the inversion of small matrices [17]. Thus, the computational complexity of computing the restored image estimate is small, despite the large dimensionality of the problem.

In the sequel two classes of algorithms, direct and robust, are considered. These two classes involve well-established approaches and provide the framework for the development of novel algorithms with specific properties that can be used in specialized applications.
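As a small numerical illustration of (5.20)-(5.21) (not part of the original text), the sketch below builds a two-channel operator whose blocks are circulant convolutions on a 1-D signal, applies the DFT channel-wise, and checks that each transformed block is (approximately) diagonal. The unitary DFT and its conjugate transpose are used here; the block contents are arbitrary.

```python
import numpy as np
from scipy.linalg import circulant

N, p = 16, 2
# Blocks of a partially block-circulant operator: each block is circulant.
h = np.zeros(N)
h[:3] = (0.5, 0.3, 0.2)
blocks = [[circulant(np.roll(h, i + j)) for j in range(p)] for i in range(p)]
A = np.block(blocks)

# Multichannel DFT operator: block-diagonal with the DFT matrix W, Eq. (5.20).
W = np.fft.fft(np.eye(N)) / np.sqrt(N)
Wm = np.kron(np.eye(p), W)

# Transform the operator, Eq. (5.21); every block of A_tilde becomes diagonal.
A_tilde = Wm @ A @ Wm.conj().T
cross_block = A_tilde[:N, N:2 * N]
off_diag = cross_block - np.diag(np.diag(cross_block))
print(np.max(np.abs(off_diag)))                 # numerically ~ 0
```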

5.5.2 Direct Algorithms

In this class, algorithms that derive their estimates in one step are considered. Most of them are derived from variations of either the MAP formulation or regularization theory applied to the ill-conditioned restoration problem. Their primary goal is to analyze the ill-posed problem through an associated well-posed problem, whose solution yields meaningful answers and approximations to the ill-posed problem [34]. In a broad perspective, these two approaches can be related in terms of the constraints imposed on the ill-posed problem.

The MAP estimate is computed by maximizing the log-likelihood function:

\hat{f} = \arg\max_f \{\log \Pr(f|g)\}    (5.22)

where \Pr(f|g) is the posterior density, or equivalently:

\hat{f} = \arg\max_f \{\log \Pr(g|f) + \log \Pr(f)\}    (5.23)

Introducing the data formation model and considering the noise process n uncorrelated with the image f, the optimization problem reduces to:

\hat{f} = \arg\max_f \{\log \Pr(g - Hf \,|\, f) + \log \Pr(f)\}    (5.24)

Assuming general exponential distributions, the problem is equivalently expressed as:

\hat{f}(\alpha) = \arg\min_f Q(\alpha, f) = \arg\min_f \{R_n(g - Hf) + \alpha R_f(Cf)\}    (5.25)


where \alpha is referred to as the regularization parameter. In the MAP formulation presented, this parameter depends on the variances of the signal and the noise processes. The functionals R_n(\cdot) and R_f(\cdot) are referred to as the residual and the stabilizing terms, respectively. For independently distributed Gaussian noise and a Gaussian prior distribution, the functionals R_n(\cdot) and R_f(\cdot) reduce to quadratic norms on weighted Hilbert spaces, defined as:

R_n(x) \triangleq \|x\|^2_{L_n} = x^T L_n x    (5.26)

and

R_f(x) \triangleq \|x\|^2_{L_f} = x^T L_f x    (5.27)

where L_n and L_f are diagonal weight matrices characterizing the corresponding spaces. The general form of the optimization problem in (5.25) can also be obtained from the Tikhonov-Miller approach, which regularizes the ill-posed restoration problem through the stabilizing functional R_f(\cdot) [25]. In this formulation, the regularization parameter is set to the ratio (\epsilon/E) of two bounds \epsilon and E. The first bound represents the fidelity of the solution to the data, whereas the second bound indicates a smoothness requirement on the solution. Similarly, the CLS approach minimizes the functional R_n(g - Hf) under the constraint that R_f(Cf) remains bounded. The multichannel CLS criterion is solved in [25] geometrically, by finding the center of one of the ellipsoids that bound the intersection of the two individual ellipsoids defined by:

\|g - Hf\|^2 \le \epsilon^2    (5.28)

and

\|Cf\|^2 \le E^2    (5.29)

respectively. In fact, several regularized approaches have appeared in the literature that lead to similar optimization problems utilizing quadratic functionals. These approaches derive the same form of estimator \hat{f}(\alpha) and differ only in the definition of the regularization parameter [40]. This parameter controls the effect of the stabilizing term on the residual least-squares term and, consequently, the quality of the final estimate. A cross-validation approach for the selection of this parameter is extended to the multichannel case in [41]. With these weighted norms, the solution of the MAP criterion can be obtained analytically as:

\hat{f}(\alpha) = (H^T L_n H + \alpha C^T L_f C)^{-1} H^T L_n g    (5.30)

This solution represents the estimates of the MAP approach with Gaussian distributions, the CLS formulation, the Tikhonov-Miller formulation, and the


set-theoretic formulation, extended to the multichannel representation [25], [29], [41], [42]. The operator C represents a linear, typically high-pass, operator in the form of (5.16). It can take the form of the 3-D Laplacian filter, or of the 2-D Laplacian filter applied independently to the different channels of the image [25]. In addition, a channel-adaptive form of C is proposed in [25], where one channel affects another channel according to the similarity of overall brightness in these channels. Other forms of C that reflect multichannel correlation are proposed in [15], [29], [41]. A simplified form is obtained if the weight matrices are equal to the unit matrix I. Similarly to (5.30), the corresponding estimate is expressed as:

\hat{f}(\alpha) = (H^T H + \alpha C^T C)^{-1} H^T g    (5.31)

It is also interesting to note that other estimates can be brought to the form of (5.30). Consider, for instance, the Wiener estimate expressed as [18]:

\hat{f} = R_{ff} H^T (H R_{ff} H^T + R_{nn})^{-1} g    (5.32)

where R_{ff} and R_{nn} represent the autocorrelation matrices of the multichannel signal and noise processes, respectively. For invertible matrices A and B, the following property is easily proved:

H^T (H A H^T + B)^{-1} = (H^T B^{-1} H A + I)^{-1} H^T B^{-1}    (5.33)

Using this property, the Wiener estimate can be expressed as:

\hat{f} = R_{ff} (H^T R_{nn}^{-1} H R_{ff} + I)^{-1} H^T R_{nn}^{-1} g = (H^T R_{nn}^{-1} H + R_{ff}^{-1})^{-1} H^T R_{nn}^{-1} g    (5.34)

This is the exact form of the estimate in (5.30) with weight matrices L_f = R_{ff}^{-1}, L_n = R_{nn}^{-1} and C = I. If H^T commutes with R_{nn}^{-1}, then the Wiener estimate can be written as:

\hat{f} = (H^T H + R_{nn} R_{ff}^{-1})^{-1} H^T g    (5.35)

which is similar to (5.31) with \alpha C^T C = R_{nn} R_{ff}^{-1}.

In computing the estimate \hat{f}, the multichannel DFT transform \mathcal{W} is used,

which requires the inversion of partially block-diagonal matrices. Even though methods exist for testing the invertibility of such matrices, there is, in general, no straightforward procedure for selecting the operator C for a given H so as to guarantee the invertibility of the matrix (H^T H + \alpha C^T C). This difficulty motivates the use of iterative algorithms for the multichannel image restoration problem, even in the case of quadratic functionals that allow the analytic derivation of the estimate. Nevertheless, whenever an inversion of a partially block-diagonal matrix is required, the use of the algorithm


which decomposes the entire inversion process into inversions of small (p x p) matrices is recommended [25]. This algorithm allows independent regularization of the individual inversions, thus resulting in stable implementation schemes. The recursive algorithm in [18] suffers from singularities caused by numerical computations; it is extremely sensitive, especially when applied to operators involving correlation matrices, which often have a large condition number in the inversion.

In the previous estimates, the regularizing operator C is applied uniformly to the estimate to ensure global smoothness. It has a partially block-circulant structure, with each block representing a high-pass filter kernel. This operator is defined independently of the structure of the data and thus it cannot account for non-stationary formations that appear locally in the ideal image f. As a side effect of this inefficiency, the restored image does not recover sharp edges and/or suffers from the creation of artifacts. To alleviate this problem, the CMSE approach defines the regularizing term:

R_f(Cf) = E\{\| C(f - \hat{f}) \|^2\}    (5.36)

This approach incorporates the MMSE (Wiener) estimate as prior knowledge in the restoration process and influences the restored image towards the structure of the Wiener estimate. The CMSE approach can be interpreted as a regularized optimization scheme which utilizes the prototype Wiener structure in smoothing its estimate. This approach enables the effective suppression of streak artifacts created in the restored image, especially for regularization parameters that lead to the restoration of sharp edges.

Another attempt to influence the MAP estimate through local prior information, in order to maintain spatial discontinuities, is presented in [29]. It defines a multispectral image model through a Gibbs prior over a Markov random field containing spatial and spectral clique functions. The estimate is obtained as:

\hat{f} = \arg\min_f \left\{ \sum_{c \in C} V_c(f) + \| g - Hf \|^2_L \right\}    (5.37)

where V_c(\cdot) is a function defined on a local group of points c, called a clique, and C is the set of cliques within the image. The cliques are defined in both the spatial and the spectral domains. In the spatial domain, the local cliques are defined separately on each channel and provide a measure of the local signal activity in each channel. The spatial clique functions have the form of high-pass filters; the individual function for each clique operates similarly to Cf in (5.29). These functions must favor smooth prior distributions, but they must not penalize sharp signal deviations too severely, in order to allow the restoration of sharp edges. To ensure these properties, robust metrics on the result of linear high-pass


filters, one defined for each clique, were used in [29]. Robust functions are considered extensively in the next section. In the spectral domain, clique functions are used only along the image edges, to incorporate spectral information and align object edges between frequency planes. The application of each clique function is performed locally, following the result of spatial edge detectors. The alignment of edges in multichannel image restoration is important, since it can eliminate the false colors that can result when frequency planes are processed independently [29].
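Before turning to robust algorithms, it may help to see how a direct estimate such as (5.31) is computed in practice under the partially block-circulant assumption: per spatial frequency, only a small p x p system has to be solved. The sketch below does this for p color channels with per-channel blurs and a Laplacian regularizer; it is an illustrative simplification (no cross-channel blur blocks, names invented here), not the implementation of the references above.

```python
import numpy as np

def restore_tikhonov(g, psf, alpha=0.01):
    """Per-frequency evaluation of the regularized estimate (5.31).

    g     : (rows, cols, p) degraded multichannel image.
    psf   : (rows, cols, p) centered blur kernel for each channel
            (cross-channel blocks are taken as zero in this sketch).
    alpha : regularization parameter.
    """
    rows, cols, p = g.shape
    G = np.fft.fft2(g, axes=(0, 1))
    Hf = np.fft.fft2(np.fft.ifftshift(psf, axes=(0, 1)), axes=(0, 1))
    # Periodic Laplacian as the high-pass regularizing operator C.
    lap = np.zeros((rows, cols))
    lap[0, 0] = 4
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1
    Cf = np.fft.fft2(lap)
    F = np.zeros_like(G)
    for u in range(rows):
        for v in range(cols):
            Huv = np.diag(Hf[u, v, :])          # p x p block at frequency (u, v)
            A = Huv.conj().T @ Huv + alpha * np.abs(Cf[u, v]) ** 2 * np.eye(p)
            F[u, v, :] = np.linalg.solve(A, Huv.conj().T @ G[u, v, :])
    return np.real(np.fft.ifft2(F, axes=(0, 1)))
```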

5.5.3 Robust Algorithms

The MAP restoration approach derives linear estimates under the assumption that both the signal and the noise are samples from Gaussian fields. Several limitations of this approach arise from the underlying stochastic assumptions. In image restoration applications, not only the noise statistics but also the signal statistics are determined under uncertainty. The Gaussian distribution characterizes the noise process in only a narrow range of applications; it is worth mentioning the need for filtering speckle noise in SAR images and Poisson-distributed film-grain noise in chest X-rays, mammograms and digital angiographic images [43]-[45]. In addition, the Gaussian assumption induces severe smoothing in the representation and restoration of the detailed structure of the original signal [30], [34]. Artifacts created by linear algorithms are even more pronounced in the case of multichannel image processing, due to the coupling of information among the channels and the propagation of errors.

This section reviews the framework for the development of robust regularized approaches that address the accurate representation of both the noise and the signal statistics. In order to account for and tolerate stochastic uncertainty in the restoration process, robust functionals are employed globally, in both the noise and the signal distributions. The robust approach to the multichannel problem has been interpreted as a generalized MAP approach in [33]. A non-quadratic kernel function r_n(\cdot) is applied to the entries of the residual-error vector (g - Hf), constructing the functional:

R_n(g - Hf) \triangleq \sum_{m=1}^{N} r_n\!\left( g[m] - \sum_{j=1}^{N} H[m,j]\, f[j] \right)    (5.38)

where H[m,j] denotes the (m,j)-th scalar element of the matrix H, and f[m], g[m] denote the m-th scalar elements of the vectors f and g, respectively. According to the generalized MAP formulation, the robust functional R_n(\cdot) induces a non-Gaussian noise distribution \Pr_n which, evaluated at the residual, yields the following conditional distribution of g given f:

\Pr_{g|f} = \Pr_n|_{n = g - Hf} = K_n \exp\{ -\lambda_n R_n(g - Hf) \}    (5.39)


with K_n and \lambda_n representing the normalizing constants of this distribution. Since large deviations from zero are penalized more lightly by the robust metric than by a quadratic metric, this distribution can assign significant probability to large values and supports the existence of long tails. The distribution in (5.39) with an absolute-value functional represents the Laplacian distribution, while it still reflects the Gaussian distribution with a quadratic functional. Moreover, the Huber measure in (5.39) enforces robust performance in the presence of outliers and yields asymptotically efficient estimates [31].

Alternatively, the robust metric on the signal space can be selected so as to reflect long tails in the signal distribution and allow the accurate representation of the detailed structure. This term defines a robust functional R_f(\cdot) on the signal space, based on a non-quadratic kernel function r_f(\cdot), as:

R_f(Cf) \triangleq \sum_{m=1}^{N} r_f\!\left( \sum_{j=1}^{N} C[m,j]\, f[j] \right)    (5.40)

where C[m,j] denotes the (m,j)-th scalar element of the matrix C. The prior distribution induced by the signal functional in (5.40) is given by:

\Pr_f = K_f \exp\{ -\lambda_f R_f(Cf) \}    (5.41)

where K_f and \lambda_f are the normalizing constants of the signal distribution. The operator C is defined again as a high-pass operator, possibly having the adaptive form in [25] or the combined clique form in [29]. The generalized distribution \Pr_f essentially characterizes the high-pass content of the image f. The quadratic stabilizing function utilized in conventional regularized approaches exerts a smoothing influence on sharp edges, degrading the detailed structure of the estimate. In contrast, a robust function allows the existence of sharper transitions in the estimate, since it penalizes such deviations more lightly than the quadratic scheme.

The robust measures R_f(\cdot) and R_n(\cdot) on the domains of the signal and the noise represent functionals with robust characteristics, so that uncertainty related to either the noise or the signal distribution does not significantly degrade the quality of the estimate. The signal kernel function r_f(\cdot) and the noise kernel function r_n(\cdot) are defined in terms of their derivatives \psi_f(\cdot) and \psi_n(\cdot), respectively, which in a robust estimation environment are referred to as the influence functions. Overall, the noise function accounts for efficient representation of the noise statistics and provides robustness with respect to noise outliers, whereas the signal function accounts for efficient representation of the signal statistics and for effective reconstruction of sharp edges in the estimate.

The gradient descent derivation of the robust multichannel algorithm updates the estimate on the basis of the gradient. More specifically:

f^{i+1} \triangleq f^i - l\, \nabla_f Q(\alpha, f^i) = f^i + l\, H^T \psi_n(g - H f^i) - l\, \alpha\, C^T \psi_f(C f^i)    (5.42)


with l representing the iteration parameter. This algorithm is efficiently implemented in a mixed multichannel-DFT and image-space domain: in the former domain the vector/matrix multiplications are computed, whereas in the latter the point operations of the influence functions are performed.

The convergence of such gradient descent algorithms has been studied extensively in the case of gray-level images. A sufficient condition for convergence requires that the influence functions \psi_n(\cdot) and \psi_f(\cdot) be continuous almost everywhere and be non-decreasing functions [46]. In this case, the mapping defined in (5.42) is non-expansive. Moreover, in image restoration applications the estimate of an iterative algorithm can be restricted to a closed, bounded and convex subset of R^N through the use of a non-expansive constraint operating on the gray levels of the estimate [47]. The robust algorithm in (5.42), utilizing such a constraint, defines a non-expansive mapping on a closed, bounded and convex set. This algorithm is guaranteed to converge to one of its fixed points for appropriately selected l [47].

The soft limiter as an influence function describes the Huber error measure [29], [31], [46], which is extensively employed within the noise kernel function r_n(\cdot). This measure has been used successfully in applications with noise outliers [33]. Its operation on the residual term requires the specification of a structural parameter that can be derived using typical stochastic information regarding the noise process, such as the noise variance. In general, the soft limiter associated with the residual term is considered. The Huber measure has also been used along with the stabilizing term [29]. To maintain spatial discontinuities for this term, however, functions are needed whose robust performance is independent of structural parameters, since the specification of such parameters is difficult due to the lack of information regarding the signal process. Two classes of robust functions devoid of structural parameters have been proposed. The first class involves the l_p-norms, 1 < p < 2 [48], whereas the second class involves entropic functions that operate in accordance with the human visual system [46]. In particular, the l_{1.2}-norm [48] and the absolute entropy function in [46] are recommended as kernel functions r_f(\cdot) in (5.25).
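A minimal sketch of the gradient iteration (5.42) is given below, assuming a soft-limiter (Huber) influence function for the noise term and an l_{1.2}-type influence function for the signal term, as suggested above. The matrices H and C are taken as small explicit arrays for simplicity; the parameter values and names are illustrative only.

```python
import numpy as np

def huber_influence(r, delta):
    """Soft limiter (derivative of the Huber measure): linear for small
    residuals, clipped at +/- delta for outliers."""
    return np.clip(r, -delta, delta)

def robust_restore(g, H, C, alpha=0.1, l=0.5, delta=1.0, n_iter=200):
    """Gradient iteration of Eq. (5.42).

    g : observed stacked multichannel vector, H : degradation matrix,
    C : high-pass regularizing operator (same stacked ordering).
    """
    psi_f = lambda x: 1.2 * np.sign(x) * np.abs(x) ** 0.2   # d/dx of |x|^{1.2}
    f = H.T @ g                                              # initial estimate
    for _ in range(n_iter):
        f = (f + l * H.T @ huber_influence(g - H @ f, delta)
               - l * alpha * C.T @ psi_f(C @ f))
    return f
```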

5.6 Conclusions

Color image restoration finds important applications in a variety of fields. However, the inversion of degradation processes in multichannel data is involved not so much because of the increased dimensionality of the problem, but rather due to the peculiarities of the factors that affect multichannel images. The need for accurate modeling and identification of the intra- and inter-channel correlation characteristics of the image is of critical importance. Moreover, the restoration algorithm must deal efficiently with information


exchange among all the different channels. The critical issues that determine the success of a particular restoration approach are:
1. accurate blur identification;
2. efficient modeling and identification of the color prior and posterior distributions;
3. appropriate modeling of the constraint operators employed by the restoration algorithm;
4. appropriate use of a priori information.

The study of all these issues has been restricted to the framework imposed by the partially block-circulant structure of multichannel operators. The concept of color blur identification has been treated only under the assumption of space invariance within each channel and between each pair of channels. The EM algorithm has shown good potential in the computation of the particular block-circulant components of such an operator structure [26]. Moreover, from the processing of gray-scale images, neural networks emerge as promising tools for blur identification [47].

The stochastic form of the multichannel prior and posterior distributions is the issue that seems to receive the most significant attention. Nevertheless, the structure of the log-likelihood functions preserves partial block-circularity, mainly due to the computational efficiency of the resulting algorithms. This specific structure implies wide-sense stationarity within channels and pairs of channels. Only a few approaches deviate from this assumption, by incorporating either local edge information from the data or a priori information by means of a prototype constraint image. In addition, only a few approaches break away from the Gaussian model. The development of robust multichannel algorithms presents an important challenge for the accurate modeling of the distributions of the signal and the noise processes, at least locally.

The multichannel operators employed in restoration have been considered only heuristically. Their effects on the restored estimate have not been carefully analyzed nor thoroughly understood. Furthermore, the concept of prior information has not been utilized effectively. It appears that such useful information could be used in all aspects of image restoration, from the identification of the degrading operator, to the modeling of signal and/or noise statistics, to the structure of the restoration algorithm and its constraint operators.

Towards the study of the four issues in multichannel restoration mentioned above, the wavelet analysis of the multichannel problem can play a determinant role. To justify this argument it is worth mentioning some results from gray-scale processing that trace the utility of wavelet analysis in studying and designing restoration algorithms. Consider an image f in its vector form of dimensionality N x 1. The multiresolution analysis utilizes an orthonormal wavelet basis and decomposes


the original signal into its projection onto a lower-resolution space and the detail signals [48]. Because of a dyadic increase in the duration of each basis function in the new space, this transformation implies decimation of the composite images by a factor of two in each direction. The original image can be exactly reconstructed from the multiresolution image. Each sub-image can be equivalently obtained through a filtering operation followed by dyadic decimation. The latter approach leads to the subband decomposition of the image which, under specific assumptions, becomes equivalent to the multiresolution decomposition.

The first level of multiresolution decomposition of an image f defines four filters T_i, i = 1, \ldots, 4, which are represented in the same lexicographic form as f. Each filter is essentially a separable operator that defines either a low-pass or a high-pass filter in each image direction. The decimation in each direction is represented by the operator D. Thus, the decimation of an image vector in both directions is represented by the Kronecker product (D \otimes D) of dimensionality (N/4) x N. According to the previous convention, the image vector f is decomposed through the 2-D wavelet transform into four filtered and decimated ((N/4) x 1) signals as [46]:

\tilde{f}_i \triangleq (D \otimes D)\, T_i\, f, \qquad i = 1, \ldots, 4    (5.43)

The overall signal in the wavelet domain is formulated as:

\tilde{f} = [\, \tilde{f}_1^T \; \tilde{f}_2^T \; \tilde{f}_3^T \; \tilde{f}_4^T \,]^T    (5.44)

Thus, the image decomposed in the wavelet domain can be equivalently expressed in the form of a multichannel image composed of four channels. The wavelet transform can be applied repeatedly to any subband, resulting in higher orders of multiresolution decomposition. Accordingly, the multichannel representation in (5.44) can readily be expanded to any resolution level. Define the K-dimensional (K = 4) unit vectors e_k, k = 1, \ldots, K. The multiresolution signal can now be expressed in the compact form:

$\tilde{\mathbf{f}} = \sum_{k=1}^{K} \mathbf{e}_k \otimes \tilde{\mathbf{f}}_k = \sum_{k=1}^{K} \mathbf{e}_k \otimes (D \otimes D)\, T_k\, \mathbf{f} \;\triangleq\; T \mathbf{f}$    (5.45)

Because of the orthonormality of the wavelet basis, the transform matrix T is an orthonormal operator ($T^t T = I$). The wavelet transform of a matrix operator is similarly defined. More specifically, a matrix A is represented in the multiresolution domain by a block matrix [49]:

$\tilde{A} = T A T^t$    (5.46)

whose mj-th block element is given by:

$\tilde{A}_{mj} = (D \otimes D)\, T_m\, A\, T_j^t\, (D^t \otimes D^t)$    (5.47)
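To make the four-channel interpretation concrete, the following Python sketch uses the PyWavelets package to compute one level of the 2-D wavelet decomposition of a gray-scale image, stack the four filtered and decimated sub-images as a multichannel signal in the spirit of (5.43)-(5.44), and check perfect reconstruction. The choice of wavelet and the array layout are illustrative assumptions, not taken from any of the cited references.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_multichannel(image, wavelet="db2"):
    """One level of 2-D wavelet decomposition: returns the four decimated
    sub-images stacked as a 'multichannel' array, plus the reconstruction."""
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)      # four filtered/decimated bands
    channels = np.stack([cA, cH, cV, cD], axis=0)     # multichannel form of (5.44)
    recon = pywt.idwt2((cA, (cH, cV, cD)), wavelet)   # exact reconstruction
    return channels, recon

# img = np.random.rand(256, 256)
# bands, rec = wavelet_multichannel(img)
# print(bands.shape, np.allclose(rec, img))
```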


Following the vector and matrix decompositions in the wavelet transform, it is readily proved that a multiplication Af in the image domain is equivalent to the multiplication $\tilde{A}\tilde{\mathbf{f}}$ in the wavelet domain. The representation of images in the wavelet domain provides new insight into commonly used operators. One very important attribute of image-processing operators is that of block circularity. This structure is derived under the assumptions of wide-sense stationarity of the image and noise, and the consideration of space-invariant operations. It is well known that a linear restoration operator in the image domain with block-circulant (space-invariant) structure is transformed into a wavelet-domain operator with partially block-circulant structure. Such a transformed operator functions almost independently in the different bands of the wavelet transform. Similarly, a block-circulant correlation matrix preserves little information regarding the cross-bands of the wavelet transform. Thus, the wide-sense stationarity assumption in the image space leads to a loss of cross-band correlation in the wavelet domain. The formulation of the signal in the wavelet domain elucidates the implications of typically used assumptions in the signal domain regarding wide-sense stationary processes and space-invariant operators. In addition, it provides the framework for the development of novel implementation schemes that relax unrealistic assumptions [46]. Towards this direction, partially block-circulant operators can be utilized and the implementation of conventional algorithms can be developed directly in the wavelet domain. Such an implementation scheme has two advantages compared with the 2-D DFT implementation. First, it replaces the wide-sense stationarity assumption in the image domain with a weaker assumption in the wavelet domain. The new assumption, namely wide-sense stationarity within each band and each pair of bands, implies a non-stationary image process in general. It provides a better representation of the image's detailed structure and results in the reconstruction of sharper edges in the estimate. Second, it can implement non-stationary, signal-dependent, and space-variant operators that take into consideration the localized space-frequency characteristics of the image. In essence, the design of partially block-circulant operators in the wavelet domain relaxes unnecessary assumptions and can sustain additional information regarding the statistics of the signal and the structure of the degradation, as compared with conventional designs in the 2-D DFT domain. The implementation of single-channel algorithms designed in the wavelet domain is directly associated with the implementation of multichannel algorithms. The multichannel DFT operator W in (5.20) diagonalizes the blocks of a partially block-circulant operator such as (5.47). Thus, computations with such operators in the wavelet domain can be efficiently performed in the multichannel DFT domain. It becomes evident that the extension of these issues to the design of multichannel restoration algorithms will provide powerful tools in relaxing the assumptions used up to now and in exploiting


non-stationary and space-varying multichannel correlation structures that are needed for effective modeling and efficient algorithmic design and implementation. The study of multichannel algorithms in the wavelet domain is an area that is expected to receive significant attention in the near future.

References

1. Woods, R. E., Gonzalez, R. C. (1981): Real-time digital image enhancement. Proceedings of the IEEE, 69, 634-654.
2. Bockstein, I. M. (1986): Color equalization method and its application to color image processing. Journal of the Optical Society of America, 3(5), 735-737.
3. Trahanias, P. E., Venetsanopoulos, A. N. (1992): Color image enhancement through 3-D histogram equalization. Proceedings of the 15th IAPR International Conference on Pattern Recognition, 1, 545-548.
4. Faugeras, O. D. (1979): Digital color image processing within the framework of a human visual model. IEEE Transactions on Acoustics, Speech and Signal Processing, 27(4), 380-393.
5. Weeks, A. R., Haque, G. E., Myler, H. R. (1995): Histogram equalization of 24-bit color images in color difference (C-Y) color space. Journal of Electronic Imaging, 4(1), 15-22.
6. Strickland, R., Kim, C., McDonell, W. (1987): Digital color image enhancement based on the saturation component. Optical Engineering, 26, 609-616.
7. Trahanias, P. E., Pitas, I., Venetsanopoulos, A. N. (1994): Color Image Processing. (Advances in 2D and 3D Digital Processing: Techniques and Applications, edited by C. T. Leondes), Academic Press, N.Y.
8. Jain, A. K. (1989): Fundamentals of Digital Image Processing. Prentice Hall, Englewood Cliffs, New Jersey.
9. Kuan, D., Phipps, G., Hsueh, A. C. (1988): Autonomous robotic vehicle road following. IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(5): 648-658.
10. Holyer, R. J., Peckinpaugh, S. H. (1989): Edge detection applied to satellite imagery of the oceans. IEEE Transactions on Geoscience and Remote Sensing, 27(1): 46-56.
11. Rignot, E., Chellappa, R. (1992): Segmentation of polarimetric synthetic aperture radar data. IEEE Transactions on Image Processing, 1(3): 281-299.
12. Robb, R. A. (ed.) (1985): Three-Dimensional Biomedical Imaging. CRC Press, Boca Raton, FL.
13. Mallat, S. G. (1989): A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7): 674-693.
14. Mallat, S. G. (1989): Multifrequency channel decompositions of images and wavelet models. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(12): 2091-2110.
15. Zervakis, M. E. (1992): Optimal restoration of multichannel images based on constrained mean-square estimation. Journal of Visual Communication and Image Representation, 3(4): 392-411.
16. Sadjadi, F. A. (1990): Perspective on techniques for enhancing speckled imagery. Optical Engineering, 29(1): 25-31.
17. Hunt, B. R., Kubler, O. (1984): Karhunen-Loeve multispectral image restoration, part I: Theory. IEEE Transactions on Acoustics, Speech, and Signal Processing, 32(3): 592-600.


18. Galatsanos, N. P., Chin, R. T. (1991): Restoration of color images by multichannel Kalman filtering. IEEE Transactions on Signal Processing, 39(10): 2237-2252.
19. Angwin, D., Kaufman, H. (1987): Adaptive restoration of color images. Proceedings of the 26th Conference on Decision and Control, Los Angeles, CA.
20. Galatsanos, N. P., Chin, R. T. (1989): Digital restoration of multichannel images. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(3): 415-422.
21. Ozkan, M. K., Erdem, A. T., Sezan, M. I., Tekalp, A. M. (1992): Efficient multiframe Wiener restoration of blurred and noisy image sequences. IEEE Transactions on Image Processing, 1: 453-476.
22. Tekalp, A. M., Pavlovic, G. (1990): Multichannel image modeling and Kalman filtering for multispectral image restoration. Signal Processing, 19: 221-232.
23. Hunt, B. R. (1973): The application of constrained least squares estimation to image restoration by digital computer. IEEE Transactions on Computers, C-22(9).
24. Galatsanos, N. P., Katsaggelos, A. K. (1992): Methods for choosing the regularizing parameter and estimating the noise variance in image restoration and their relation. IEEE Transactions on Image Processing, 1(3): 322-336.
25. Galatsanos, N. P., Katsaggelos, A. K., Chin, R. T., Hillery, A. D. (1991): Least squares restoration of multichannel images. IEEE Transactions on Signal Processing, 39(10): 2222-2236.
26. Tom, B. C. S., Lay, K. T., Katsaggelos, A. K. (1996): Multichannel image identification and restoration using the Expectation-Maximization algorithm. Optical Engineering, 35: 241-254.
27. Sezan, M. I., Trussell, H. J. (1991): Prototype image constraints for set-theoretic image restoration. IEEE Transactions on Signal Processing, 39(10): 2227-2285.
28. Katsaggelos, A. K., Lay, K. T., Galatsanos, N. P. (1993): A general framework for frequency domain multichannel signal processing. IEEE Transactions on Image Processing, 2: 417-420.
29. Schultz, R., Stevenson, R. (1995): Stochastic modeling and estimation of multispectral image data. IEEE Transactions on Image Processing, 4(8): 1109-1119.
30. Zervakis, M. E., Kwon, T. M. (1992): Robust estimation techniques in regularized image restoration. Optical Engineering, 31(10).
31. Kassam, S. A., Poor, H. V. (1985): Robust techniques for signal processing: A survey. Proceedings of the IEEE, 73(3): 433-481.
32. Hebert, T., Leahy, R. (1989): A generalized EM algorithm for 3-D Bayesian reconstruction from Poisson data using Gibbs priors. IEEE Transactions on Medical Imaging, 8(2).
33. Zervakis, M. E., Venetsanopoulos, A. N. (1990): M-estimators in robust nonlinear image restoration. Optical Engineering, 29(5): 455-470.
34. Zervakis, M. E. (1996): Generalized maximum a posteriori processing of multichannel images and applications. Circuits, Systems and Signal Processing, 15(2): 233-260.
35. Green, P. J. (1990): Bayesian reconstruction from emission tomography data using the modified EM algorithm. IEEE Transactions on Medical Imaging, 9(1).
36. Stevenson, R., Delp, E. (1990): Fitting curves with discontinuities. Proceedings of the First International Workshop on Robust Computer Vision, Seattle, WA.
37. Tirakis, A., Delopoulos, A., Kollias, S. (1995): 2-D filter bank design for optimal reconstruction using limited subband information. IEEE Transactions on Image Processing, 4(8): 1160-1165.


38. Delopoulos, A., Kollias, S. (1996): Optimal filterbanks for reconstruction from noisy subband components. IEEE Transactions on Signal Processing, 44(2): 212-224.
39. Katsaggelos, A. K. (ed.) (1991): Digital Image Restoration. Springer Verlag, New York, N.Y.
40. Zhu, W., Galatsanos, N. P., Katsaggelos, A. K. (1995): Regularized multichannel restoration using cross-validation. Graphical Models and Image Processing, 57: 38-54.
41. Chan, C. L., Katsaggelos, A. K., Sahakian, A. V. (1993): Image sequence filtering in quantum-limited noise with applications to low-dose fluoroscopy. IEEE Transactions on Medical Imaging, 12: 610-621.
42. Han, Y. S., Herrington, D. H., Snyder, W. E. (1992): Quantitative angiography using mean field annealing. Proceedings of Computers in Cardiology 1992, 1: 119-122.
43. Slump, C. H. (1992): Real time image restoration in diagnostic X-ray imaging: The effects on quantum noise. 11th IAPR International Conference on Pattern Recognition, 2: 693-696.
44. Zervakis, M. E., Katsaggelos, A. K., Kwon, T. M. (1995): A class of robust entropic functionals for image restoration. IEEE Transactions on Image Processing, 4: 752-773.
45. Schafer, R. W., Mersereau, R. M., Richards, M. A. (1981): Constrained iterative restoration algorithms. Proceedings of the IEEE, 69(4): 432-451.
46. Bouman, C., Sauer, K. (1993): A generalized Gaussian image model for edge-preserving MAP estimation. IEEE Transactions on Image Processing, 2(3): 296-310.
47. Figueiredo, M. A., Leitao, J. M. M. (1994): Sequential and parallel image restoration: Neural network implementations. IEEE Transactions on Image Processing, 3: 789-801.
48. Daubechies, I. (1992): Ten Lectures on Wavelets. SIAM, Philadelphia, PA.
49. Zervakis, M. E., Kwon, T. M., Yang, J.-S. (1995): Multiresolution image restoration in the wavelet domain. IEEE Transactions on Circuits and Systems II, 42(9): 578-591.


6. Color Image Segmentation

6.1 Introduction

Image segmentation refers to partitioning an image into different regions that are homogeneous with respect to some image feature. Image segmentation is an important aspect of human visual perception. Humans use their visual sense to effortlessly partition their surrounding environment into different objects, which helps them recognize these objects, guide their movements, and accomplish almost every other task in their lives. It is a complex process that includes many interacting components involved with the analysis of color, shape, motion, and texture of objects in images. However, for the human visual system, the segmentation of images is a spontaneous, natural activity. Unfortunately, it is not easy to create artificial algorithms whose performance is comparable to that of the human visual system. One of the major obstacles to the successful development of theories of segmentation has been a tendency to underestimate the complexity of the problem, precisely because human performance is mediated by methods that are largely subconscious. Because of this, the segmentation of images is weakened by various types of uncertainty, making most simple segmentation techniques ineffective [1]. Image segmentation is usually the first task of any image analysis process. All subsequent tasks, such as feature extraction and object recognition, rely heavily on the quality of the segmentation. Without a good segmentation algorithm, an object may never be recognizable. Over-segmenting an image will split an object into different regions, while under-segmenting it will group various objects into one region. In this way, the segmentation step determines the eventual success or failure of the analysis. For this reason, considerable care is taken to improve the probability of successful segmentation. Emerging applications, such as multimedia databases, digital photography, and web-based visual data processing, have generated a renewed interest in image segmentation, so that the field has become an active area of research not only in engineering and computer science but also in other academic disciplines, such as geography, medical imaging, criminal justice, and remote sensing. Image segmentation has taken a central place in numerous applications, including, but not limited to, multimedia databases, color image and video transmission over the Internet, digital broadcasting, interactive TV, video-on-demand, computer-based training, distance education,


videoconferencing, and telemedicine, all supported by the ongoing development of the hardware and communications infrastructure for visual applications. Many reasons can be cited for the success of the field. There is a strong underlying analytical framework based on mathematics, statistics, and physics. Thus, well-founded, robust algorithms that eventually lead to consumer applications can be designed. The field has also been helped tremendously by advances in computer and memory technology, enabling faster processing of images, as well as by advances in scanning and display. Most attention on image segmentation has been focused on gray-scale (monochrome) image segmentation. A common problem in the segmentation of a gray-scale image occurs when the image has a background of varying gray level, such as gradually changing shades, or when regions assume some broad range of gray levels. This problem is inherent, since intensity is the only information available from monochrome images. It is known that the human eye can detect only in the neighborhood of one or two dozen intensity levels at any point in a complex image due to brightness adaptation, but can differentiate thousands of color shades and intensities [2]. This chapter surveys the existing techniques (past and present) of color image segmentation. The techniques are reviewed in six major classes: pixel-based, edge-based, region-based, model-based, physics-based, and hybrid techniques. A number of image segmentation survey papers have been published [3-5], and [6], but they either do not consider color image segmentation techniques or do not survey any modern segmentation techniques. Because of the uncertainty problems encountered while trying to model the human visual system, there is currently a large number of image segmentation techniques available. However, no general methods have been found that perform adequately across a varied set of images. The early attempts at gray-scale image segmentation were based on three techniques:
- Pixel-based techniques
- Region-based techniques
- Edge-based techniques
Even though these techniques were introduced three decades ago, they still receive great attention in color image segmentation research today. Three major techniques that have been introduced more recently are motion-based, physics-based, and model-based color image segmentation. The following sections survey the various techniques of color image segmentation, starting with pixel-based techniques and continuing with edge-based, region-based, model-based, physics-based, and, in the last section, hybrid techniques. The final section describes the applicability of a specific region-based color image segmentation technique.


6.2 Pixel-based Techniques

Pixel-based techniques do not consider the spatial context but decide solely on the basis of the color features at individual pixels. This attribute has its advantages and disadvantages. The simplicity of the algorithms is an advantage of pixel-based techniques, while the lack of spatial constraints makes them susceptible to noise in the images. Model-based techniques, which utilize spatial interaction models to model images, are used to further improve pixel-based techniques.

6.2.1 Histogram Thresholding

The simplest technique of pixel-based segmentation is histogram thresholding. It is one of the oldest and most popular techniques. If an image is composed of distinct regions, the color histogram of the image usually shows different peaks, each corresponding to one region, and adjacent peaks are likely to be separated by a valley. For example, if the image has a distinct object on a background, the color histogram is likely to be bimodal with a deep valley. In this case, the bottom of the valley is taken as the threshold, so that pixels above and below this value on the histogram are grouped into different regions. This is called bi-level thresholding [3]. In multi-thresholding the image is composed of a set of distinct regions. In this case, the histogram has one or more deep valleys, and the selection of the thresholds becomes a problem of detecting valleys. Normally, however, detection of the valleys is not a trivial job. A method of color image segmentation which incorporates the histograms of nine color features from the three color spaces RGB, HSI, and YIQ (three from each color space) was proposed in [9]. The most dominant peak of the nine histograms determines the interval of the subregion. Pixels falling in this interval create one region and pixels falling outside of it create other regions. The dominant peak selection is driven by a priority list of seven. The general segmentation algorithm consists of the following steps:
1. Put the image domain into the initially empty region list.
2. Determine the nine histograms of the region being considered. The entire image is the original region.
3. Locate all peaks in the set of histograms.
4. Select the best peak in the list of peaks using the priority list of seven. If there is none, output this uniform region and go to step 2.
5. Determine and apply the threshold.
6. Add the regions produced to the region list.
7. Go to step 2.
Additional processing, such as removal of small regions and addition of textural features, e.g. density of edges, can be used to improve the performance of the basic method.
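The recursive structure of this scheme is easy to prototype. The sketch below is a simplified, hypothetical variant of the approach in [9]: it uses only the histograms of the three given color features (rather than nine features from three color spaces), selects the dominant peak by height instead of by the priority list, and thresholds at the nearest valleys on either side of the peak. All names and defaults are illustrative.

```python
import numpy as np

def histogram_peak_segmentation(pixels, bins=64, min_size=500):
    """Recursive peak/valley thresholding on a set of pixel colors
    (an N x 3 array); returns a list of pixel subsets (regions)."""
    regions, stack = [], [pixels]
    edges = np.linspace(0.0, 255.0, bins + 1)
    while stack:
        pix = stack.pop()
        if len(pix) < min_size:
            regions.append(pix)
            continue
        best = None
        for c in range(pix.shape[1]):                    # one histogram per color feature
            h, _ = np.histogram(pix[:, c], bins=bins, range=(0.0, 255.0))
            p = int(np.argmax(h))                        # dominant peak (height only)
            lo, hi = p, p
            while lo > 0 and h[lo - 1] <= h[lo]:         # walk down to the left valley
                lo -= 1
            while hi < bins - 1 and h[hi + 1] <= h[hi]:  # walk down to the right valley
                hi += 1
            if best is None or h[p] > best[0]:
                best = (h[p], c, edges[lo], edges[hi + 1])
        _, c, t_lo, t_hi = best
        inside = (pix[:, c] >= t_lo) & (pix[:, c] <= t_hi)
        if inside.all() or not inside.any():             # uniform region: output it
            regions.append(pix)
        else:                                            # split and re-examine both parts
            stack.append(pix[inside])
            stack.append(pix[~inside])
    return regions
```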


Another attempt to derive a set of effective color features by systematic experiments in region segmentation is presented in [10]. This method is based on the observation that the color feature which has deep valleys in its histogram and the largest discriminant power to separate the clusters in a given region need not be one of the R, G, and B color features. Since a feature is said to have large discriminant power if its variance is large, color features with large discriminant power were derived by the Karhunen-Loeve (KL) transformation. At every step of segmenting a region, the new color features are calculated for the pixels in that region by the KL transform of the R, G, and B data. Based on extensive experiments, it was found in [10] that the following three color features constitute an effective set of features for segmentation:

$I_1 = \frac{R + G + B}{3}$    (6.1)

$I_2 = \frac{R - B}{2}$    (6.2)

$I_3 = \frac{2G - R - B}{4}$    (6.3)
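As an illustration, a minimal NumPy sketch of this color-feature transform is given below; it assumes an H x W x 3 RGB image and uses the definitions written in (6.1)-(6.3). The function name is illustrative.

```python
import numpy as np

def rgb_to_i1i2i3(rgb):
    """Convert an RGB image (H x W x 3) to the I1I2I3 features of (6.1)-(6.3)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i1 = (r + g + b) / 3.0        # intensity-like feature (6.1)
    i2 = (r - b) / 2.0            # red-blue opponent feature (6.2)
    i3 = (2.0 * g - r - b) / 4.0  # green-magenta opponent feature (6.3)
    return np.stack([i1, i2, i3], axis=-1)

# Example: histogram of the first feature of an image region
# features = rgb_to_i1i2i3(image)
# hist, edges = np.histogram(features[..., 0], bins=64)
```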

The proposed color features were compared with the RGB, XYZ, YIQ, L*a*b*, and HSI color primaries. Results reported in [10] indicated that the I1I2I3 color space has only a slight advantage over the other seven color spaces. However, according to [10] the I1I2I3 space should be selected because of the simplicity of the transformation to this space from the RGB color space. The opponent color space representation of I1I2I3 has also been used for segmentation purposes in [11]. A model of the human visual system is introduced in [11] and is used as a preprocessor for scene analysis. The proposed human visual system model yields a pair of opponent colors as a two-dimensional feature for further scene analysis. A rough segmentation can be performed solely on the basis of the 2-D histogram of the opponent colors. The procedure starts by transforming the RGB values to the opponent color pairs red-green (RG) and yellow-blue (YB), and the intensity feature (I). Then the three channels are smoothed by applying band-pass filters. The center frequencies of the filters are in the proportion I : RG : YB = 4 : 2 : 1, so that the intensity channel shows the strongest high-pass character, which puts emphasis on the edges of the image. Then peaks and flat levels in the 2-D RG-YB histogram are searched for. These peaks and flat points determine the areas in the RG-YB plane. Pixels falling into one of these areas create one region and pixels falling into another area create another region. Although this method leaves some non-attachable pixels in the image, it was argued in [11] that the proposed technique is superior to the methodologies suggested in [9] and [10]. The opponent-color-based methodology can be improved further by merging pixels that are not attached to a region [12]. Spatial neighborhood relations are used for the merging criterion. The improvement consists of an additional refinement process that is applied to the segmentation results


obtained in [11]. If one or more of the eight neighbors of a non-assigned pixel are assigned to the same region, the non-assigned pixel is marked for assignment to this region. Nothing is done if none of the neighborhood pixels is assigned or if several pixels in the neighborhood belong to different regions. After the entire image is scanned, the marked pixels are assigned to the corresponding regions. This procedure is applied five times to the intermediate results. According to [12], while 30% to 80% of the pixels in an image are assigned to regions when employing the approach of [11], less than 10% are not assigned to regions when using the modified algorithm.
Another segmentation technique based on histogram thresholding is the one suggested in [13]. This technique attempts to detect the peaks of the three histograms of the hue, value, and chroma (HVC) components of the Munsell color space. Since no analytical formula exists for the transformation from the CIE standard system to the Munsell system, conversion is based on a table [8]. The segmentation algorithm of [13] consists of the following steps:
1. The histogram of the region under consideration is computed for each of the color features (HVC). Initially the entire image is regarded as the region. The histograms are smoothed by an averaging operator.
2. The most dominant peak in any of the three histograms is found. The peak selection is based on a shape analysis of each peak under consideration. First, some clear peaks are selected. Next, the following criterion function is calculated for each candidate peak:

$f = \frac{S_p}{T_a F_p} \times 100$    (6.4)

where $S_p$ denotes the peak area between two valleys, $F_p$ is the full width at half maximum of the peak, and $T_a$ is the total number of pixels in the specified region, i.e., the area of the histogram.
3. The two thresholds, one on each side of the most dominant peak of the three histograms, are found. Applying the thresholds partitions the region into two sets of subregions: one consists of subregions corresponding to the color attributes within the threshold limits, and the other is a set of subregions with the remaining attribute values.
4. The thresholding process is repeated for the extracted subregions. If all the histograms become mono-modal, a suitable label is assigned to the latest extracted subregions.
5. Steps 1 through 4 are repeated for the remaining regions. The segmentation is terminated when the areas of the regions are sufficiently small in comparison with the original image size or no histogram has significant peaks. The remaining pixels which have not been assigned to a region are merged into the neighboring regions of similar color.
In summary, histogram thresholding is one of the simplest methods of image segmentation. This attribute lends it great consideration in current


segmentation research when a rough segmentation of the image is needed. Many of the current image and video database systems [14] employ histogram thresholding for image segmentation in image and video retrieval. A common drawback of histogram thresholding is that it often produces unsatisfactory segmentation results on color images of natural scenes.

6.2.2 Clustering

Clustering is another pixel-based technique that is extensively used for image segmentation. The rationale of the clustering technique is that, typically, the colors in an image tend to form clusters in the histogram, one for each object in the image. In the clustering-based technique, a histogram is first obtained from the color values at all pixels and the shape of each cluster is found. Then, each pixel in the image is assigned to the cluster that is closest to the pixel color. Many different clustering algorithms are in existence today [15], [16]. Among these, the K-means and the fuzzy K-means algorithms have received extensive attention [21-23,27,28], and [29]. Clustering techniques can be combined with histogram thresholding approaches. In [17] the histogram thresholding method of [13] was extended to include clustering. The method consists of two steps. The first step is a modification of the algorithm introduced in [13] and reviewed in the previous section (Section 6.2.1). The modification consists of computing the principal component axes in the CIE L*a*b* color space for every region to be segmented. In other words, the color features are transformed onto the principal component axes. Peaks and valleys are searched for in the three 1-D histograms along the three coordinate axes. The second step is a reclassification of the pixels based on a color distance measure. Suppose a set of K representative colors {m_1, m_2, ..., m_K} is extracted from the image. The first cluster center a_1 in the color space is chosen as a_1 = m_1. Next, the color difference from m_2 to a_1 is computed. If this difference exceeds a given threshold T, a new cluster center a_2 is created as a_2 = m_2. Otherwise m_2 is assigned to the domain of the class a_1. In a similar fashion, the color difference from each representative color (m_3, m_4, ...) to every established cluster center is computed and thresholded. A new cluster is created if all of these distances exceed T; otherwise the color is assigned to the class to which it is closest. A method of detecting clusters by fitting to them circular-cylindrical decision elements in the CIE L*a*b* uniform color coordinate system was proposed in [18] and [19]. This estimates the clusters' color distributions without imposing any constraints on their forms. Boundaries of the decision elements are formed with constant-lightness and constant-chromaticity loci. Each boundary is obtained using only 1-D histograms of the L*H°C* cylindrical coordinates of the image data. The Fisher linear discriminant method [68] is then used to simultaneously project the detected color clusters onto a line. For two clusters w_1 and w_2 the Fisher line W is given by:


$W = (K_1 + K_2)^{-1} (M_1 - M_2)$    (6.5)

where $(K_1, K_2)$ and $(M_1, M_2)$ are the covariance matrices and the mean vectors, respectively, of the two clusters.

The color vectors of the image points, which are the elements of clusters w_1 and w_2, are then projected onto this line using the equation d(C) = W^T C, where C is a color vector in one of the clusters and d is the linear discriminant function. The 1-D histogram is calculated for the projected data points and thresholds are determined by the peaks and valleys in the histogram. Projecting the estimated color clusters onto a line permits utilization of all the property values of the clusters for segmentation and inherently recognizes their respective cross-correlation. This way, region acceptance is not limited to the information available from one color component, which gives the method an advantage over the multidimensional histogram thresholding techniques presented in Section 6.2.1.
Recently, a color segmentation algorithm that uses the watershed algorithm to segment the 3-D color histogram of an image was proposed in [69]. An explanation of the morphological watershed transform can be found in [67]. The L*u*v* color space is utilized for the development of the algorithm. The non-linearity of the transformation from the RGB space to the L*u*v* space transforms the homogeneous noise in the RGB space into inhomogeneous noise. Even if the RGB data is smoothed prior to the transformation, due to the non-linearity of the transform, any small residual amount of noise may be significantly amplified. To this end, an adaptive filter is employed. The filter removes noise from a 3-D color histogram in the L*u*v* color space with subsequent perceptual coarsening. The algorithm is as follows:
1. Calculate the color histogram of the image.
2. Filter it for noise reduction.
3. Perform perceptual coarsening.
4. Perform clustering using the watershed algorithm in the L*u*v* color space.
A new segmentation algorithm for color images based on mathematical morphology has recently been presented in [71]. The algorithm employs the scheme of thresholding the difference of two Gaussian-smoothed 3-D histograms, which differ only in the standard deviation used, to obtain the initial seeds for clustering. It then uses a closing operation and adaptive dilation to extract the number of clusters and their representative values, and to include the bins suppressed during Gaussian smoothing, without a priori knowledge of the image. Through experimentation on various color spaces, such as the RGB, XYZ, YIQ, and I1I2I3, it was concluded that the proposed algorithm yields almost identical segmentation results in any color space. In other words, the algorithm works independently of the choice of color space. Among the most popular clustering algorithms in existence today [15], [16], the K-means and the fuzzy K-means algorithms have received extensive


attention [21-23,27,28], and [29]. A survey of segmentation techniques that utilize these clustering algorithms is presented next.

K-means algorithm. The K-means algorithm for cluster seeking is based on the minimization of a performance index which is defined as the sum of the squared distances from all points in a cluster domain to the cluster center. This algorithm consists of the following steps [16]:
1. Determine or choose K initial cluster centers c_1(1), c_2(1), ..., c_K(1). Here c_1(1) denotes the color features of the first cluster center during the first iteration.
2. At the k-th iteration each pixel a is assigned to one of the K clusters C_1(k), ..., C_K(k), where C_j(k) denotes the set of pixels whose cluster center is c_j(k). The pixel a is assigned to cluster C_j(k) if:

$\|a - c_j(k)\| \leq \|a - c_i(k)\|$    (6.6)

for all $i, j = 1, 2, \ldots, K$, $i \neq j$.    (6.7)

3. From the results of step 2, compute the new cluster centers c_j(k+1), j = 1, 2, ..., K, such that the sum of the squared distances from all points in C_j(k) to the new cluster center is minimized. In other words, the new cluster center c_j(k+1) is computed so that the performance index

$J_j = \sum_{a \in C_j(k)} \|a - c_j(k+1)\|^2, \qquad j = 1, 2, \ldots, K$    (6.8)

is minimized. The new cluster center which minimizes this index is the sample mean of C_j(k). Therefore, the new cluster center is given by:

$c_j(k+1) = \frac{1}{N_j} \sum_{a \in C_j(k)} a, \qquad j = 1, 2, \ldots, K$    (6.9)

where N_j is the number of pixels of cluster C_j(k).
4. If c_j(k+1) = c_j(k) for j = 1, 2, ..., K, the algorithm has converged and the procedure is terminated. Otherwise, go to step 2.
The determination of the initial cluster centers plays a crucial part, because the better the initial partition is, the faster the algorithm will converge. A comparison between the K-means clustering technique of color image segmentation and a region-based technique is given in [21]. The two algorithms were compared in several color spaces, such as the RGB, XYZ, HSI, L*a*b*, and the I1I2I3 color space of [10]. In the clustering algorithm, the initial cluster centers of the image were determined by first generating the m-dimensional histogram of the image and then determining the dominant peaks in the histogram. The K initial cluster centers correspond to K dominant peaks in the histogram. Their results showed that the K-means clustering method did not perform as well as the region-based technique.
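The iteration in Steps 1-4 translates almost directly into code. The following NumPy sketch clusters the pixel colors of an image with K-means; it is a simplified illustration (random initial centers rather than histogram peaks, and a fixed iteration cap), and all names and defaults are illustrative.

```python
import numpy as np

def kmeans_segment(image, K=4, max_iter=50, seed=0):
    """Cluster the pixels of an H x W x 3 color image with K-means and
    return a label map of shape H x W plus the final cluster centers."""
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(np.float64)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), K, replace=False)]   # step 1
    for _ in range(max_iter):
        # step 2: assign each pixel to the nearest cluster center, as in (6.6)
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # step 3: recompute each center as the sample mean of its cluster, as in (6.9)
        new_centers = np.array([
            pixels[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(K)])
        # step 4: stop when the centers no longer move
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels.reshape(h, w), centers
```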


Similarly, a segmentation method which uses the K-means algorithm to locate clusters within the HSI color space was proposed in [22]. The hue, saturation, and luminance components of the image are determined and used to form a three-dimensional vector that represents the color of any pixel within the image. The algorithm treats each color within the image simply as a three-dimensional vector. K initial cluster centers are chosen at random. K-means clustering is then implemented with the Euclidean distance in the three-dimensional vector space as the metric used to distribute the pixels (Step 2 above). A modified algorithm can be obtained by separately segmenting the hue feature, followed by segmentation of the two-dimensional saturation and luminance features. This approach biases the segmentation process towards the hue value. The selection of the hue component is based on the fact that hue corresponds well to human visual perception. A parallel K-means clustering algorithm to track the centroids of clusters formed by moving objects in a sequence of color images was proposed in [23]. The resulting tracking algorithm is robust with respect to shape variations and partial occlusions of the objects.

Fuzzy K-means algorithm. The K-means algorithm discussed in the previous section can be extended to include fuzzy inference rules. The so-called fuzzy K-means algorithm, which is also referred to as the fuzzy c-means algorithm, was first generalized in [25], [26]. The algorithm uses an iterative optimization of an objective function based on a weighted similarity measure between the pixels in the image and each of the K cluster centers. A local extremum of this objective function indicates an 'optimal' clustering of the input data. The objective function that is minimized is given by:

$J_m(U, v) = \sum_{k=1}^{n} \sum_{i=1}^{K} (\mu_{ik})^m (d_{ik})^2$    (6.10)

where $\mu_{ik}$ is the fuzzy membership value of pixel k in cluster center i, $d_{ik}$ is the distance between pixel k and cluster center i under any inner-product-induced norm metric (e.g. the Euclidean norm), m varies the nature of the clustering, with hard clustering at m = 1 and increasingly fuzzier clustering at higher values of m, v is the set of K cluster centers, and U is the fuzzy K-partition of the image. The algorithm relies on appropriate choices of U and v to minimize the objective function given above. The minimization of the objective function can be done in an iterative fashion [27]. For the given set of data points $x_1, x_2, \ldots, x_n$:
1. Fix the number of clusters K, $2 \leq K < n$, where n is the number of pixels. Fix m, $1 \leq m < \infty$. Choose any inner-product-induced norm metric $\|\cdot\|$.
2. Initialize the fuzzy K-partition $U^{(b)}$ to any element of the set of all possible fuzzy partitions, with b = 0 initially.


3. Calculate the K cluster centers $\{v_i^{(b)}\}$ with $U^{(b)}$ and:

$v_i = \frac{\sum_{k=1}^{n} (\mu_{ik})^m x_k}{\sum_{k=1}^{n} (\mu_{ik})^m}, \qquad i = 1, \ldots, K$    (6.11)

4. Update $U^{(b)}$. Let $d_{ik} = \|x_k - v_i\|$; if $d_{ik} \neq 0$,

$\mu_{ik} = \frac{1}{\sum_{j=1}^{K} \left( d_{ik} / d_{jk} \right)^{2/(m-1)}}$    (6.12)

else set $\mu_{ik} = 0$.
5. Compare $U^{(b)}$ and $U^{(b+1)}$ in a matrix norm: if $\|U^{(b)} - U^{(b+1)}\| \leq \varepsilon$, stop; otherwise, set b = b + 1 and return to step 3.
There are a number of parameters that need to be set before the algorithm can be used. These are: K, m, $\varepsilon$, $U^{(0)}$, the inner-product-induced norm metric, and the number of items in the data set, n. Due to the large number of data items n being processed at any one time, a randomly chosen training subset of pixels taken from the input picture can be clustered initially [27]. An arbitrary number of initial clusters can also be used at the beginning of the segmentation process. The cluster centers of the training set are used to calculate membership functions for all of the pixels in the image using (6.12) above. These membership functions are examined, and any pixel with a membership above a pre-defined threshold, called an α-cut, is assigned to the feature-space cluster of that membership function. All of the pixels that remain are put back into the algorithm and the process is repeated until either all or a pre-determined number of the pixels are identified as belonging to the clusters that were found during each iteration. Experiments were done in both the RGB and the I1I2I3 color spaces. It was suggested in [27] that the difference in results between the two is minimal. This type of algorithm produces spherical or ellipsoidal clusters in the feature space, paralleling human visual color matching for constant chromaticity, which has been shown to follow a spherical or ellipsoidal cluster pattern [27].
In [28] a segmentation algorithm for aerial images that utilizes the fuzzy clustering principle was proposed. The method employs region-growing concepts and a pyramidal data structure for hierarchical analysis. Segmentation of the image at a particular processing level is done by the fuzzy K-means algorithm. Four values are replaced by their mean value to construct a higher level in the pyramid. Starting from the highest level, regions are created by pixels that have their fuzzy membership value above the α-cut. If the homogeneity test fails, regions are split to form the next-level regions, which are again subjected to the fuzzy K-means algorithm. This algorithm is a region-splitting algorithm.
A color image segmentation algorithm based upon histogram thresholding and fuzzy K-means techniques was proposed in [29]. The segmentation


technique can be considered a kind of coarse-to-fine technique. The strategy was adopted to reduce the computational complexity required by the fuzzy K-means algorithm. The coarse segmentation stage attempts to segment coarsely by using histogram scale-space analysis [30], [31]. This analysis enables reliable detection of dominant peaks in the given histogram and of the intervals around those peaks. The bounds of the intervals are found as zero-crossings of the second derivative of a σ-scaled version of the histogram. The σ-scaling of the histogram h(x) is defined by the convolution of h with a Gaussian function which has a mean of zero and a standard deviation equal to σ. The second derivative of the scaled function can be computed by convolution with the second derivative of the Gaussian function. Those pixels which are not segmented by the coarse segmentation are further segmented using the fuzzy K-means algorithm, proposed by Bezdek [25], [26], in the fine segmentation stage with the pre-determined clusters. In [29] different color spaces, such as the RGB, XYZ, YIQ, U*V*W*, and the I1I2I3, were utilized. It is widely recognized that clustering approaches to image segmentation suffer from problems related to: (i) adjacent clusters frequently overlap in color space, causing incorrect pixel classification, and (ii) clustering is more difficult when the number of clusters is unknown, as is typical for segmentation algorithms [29].
The pixel-based segmentation techniques surveyed in this section do not consider spatial constraints, which makes them susceptible to noise in the images. The resulting segmentation often contains isolated, small regions that are not present in noise-free images. In the past decade, many researchers have included spatial constraints in their pixel-based segmentation techniques using statistical models. These techniques will be surveyed in the model-based segmentation techniques section (Sect. 6.5).
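To make the update equations (6.11) and (6.12) concrete, here is a compact NumPy sketch of the fuzzy K-means (fuzzy c-means) iteration described above. It is a simplified illustration: it initializes the partition randomly, uses the Euclidean norm, and sidesteps the zero-distance case by clamping distances; the names and defaults are illustrative, not those of any particular reference.

```python
import numpy as np

def fuzzy_kmeans(pixels, K=4, m=2.0, eps=1e-3, max_iter=100, seed=0):
    """Fuzzy K-means on an (n x 3) array of pixel colors.
    Returns the membership matrix U (K x n) and the cluster centers (K x 3)."""
    n = len(pixels)
    rng = np.random.default_rng(seed)
    U = rng.random((K, n))
    U /= U.sum(axis=0)                       # columns sum to one: a fuzzy K-partition
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um @ pixels) / Um.sum(axis=1, keepdims=True)          # (6.11)
        d = np.linalg.norm(pixels[None, :, :] - centers[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)             # clamp instead of the d_ik = 0 branch
        U_new = 1.0 / ((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))).sum(axis=1)  # (6.12)
        if np.linalg.norm(U_new - U) <= eps:  # step 5: matrix-norm stopping test
            U = U_new
            break
        U = U_new
    return U, centers
```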

6.3 Region-based Techniques

Region-based techniques focus on the continuity of regions in the image. Segmenting an image into regions is accomplished directly through region-based segmentation, which makes it one of the most popular classes of techniques used today [33]. Unlike pixel-based techniques, region-based techniques consider both the color distribution in color space and spatial constraints. Standard techniques include region growing and split-and-merge techniques. Region growing is the process of grouping neighboring pixels or collections of pixels with similar properties into larger regions. The split-and-merge technique consists of iteratively splitting the image into smaller and smaller regions and testing whether adjacent regions need to be merged into one. The process of merging pixels or regions to produce larger regions is usually governed by a homogeneity criterion, such as the distance measures discussed in Chap. 2.


6.3.1 Region Growing

Region growing is the process of grouping neighboring pixels or collections of pixels with similar properties into larger regions. Testing for similarity is usually achieved through a homogeneity criterion. Quite often, after an image is segmented into regions using a region growing algorithm, the regions are further merged for improved results. A region growing algorithm typically starts with a number of seed pixels in an image and from these grows regions by iteratively adding unassigned neighboring pixels that satisfy some homogeneity criterion with the existing region of the seed pixel. That is, an unassigned pixel neighboring a region that started from a seed pixel may be assigned to that region if it satisfies some homogeneity criterion. If the pixel is assigned to the region, the pixel set of the region is updated to include this pixel. Region growing techniques differ in the choice of homogeneity criterion and the choice of seed pixels. Several homogeneity criteria linked to color similarity or spatial similarity can be used to decide whether a pixel belongs to a region. These criteria can be defined from local, regional, or global considerations. The choice of seed pixels can be supervised or unsupervised. In a supervised method the user chooses the seed pixels, while in an unsupervised method the choice is made by the algorithm. In [34] a region growing segmentation algorithm was compared against an edge detection algorithm and a split-and-merge algorithm. The algorithms were tested in the RGB, YIQ, HLS (hue, lightness, and saturation), and L*a*b* color spaces. The region growing algorithm of [34] is a supervised one where the seed pixels and threshold values are chosen by the user. The Euclidean distance in color space was used to determine which pixels in the image satisfy the homogeneity condition. If the color of the seed pixel is given as $S = (s_1, s_2, s_3)$ and the color of the pixel under consideration is $P = (p_1, p_2, p_3)$, all pixels which satisfy

$(s_1 - p_1)^2 + (s_2 - p_2)^2 + (s_3 - p_3)^2 < T^2$    (6.13)

would be included in the region. Here T is the threshold value, which is chosen by the user. The algorithm can be summarized with the following steps:
1. Choose the next seed pixel. This seed pixel is the first pixel of the region.
2. Test whether the four neighboring pixels (vertical and horizontal neighbors) of the pixel belong to the region using condition (6.13).
3. If any of the four neighboring pixels satisfy the condition, they are assigned to the region, step 2 is repeated, and their four neighbors are considered and tested for homogeneity.
4. When the region has grown to its maximum (i.e. no neighbors of the pixels on the edge of the region satisfy (6.13)), go to step 1.
It was found in [34] that the region growing algorithm performed best in the HLS and L*a*b* color spaces. The authors of the study also suggest that


instead of comparing the unassigned pixel to the seed pixel, it be compared to the mean color of the set of pixels already assigned to the region. Every time a pixel is assigned to the region, the mean value is updated. However, they did not conduct any experiments with this new homogeneity criterion.
Another color segmentation algorithm which combines region growing and region merging processes was recently proposed in [35]. This algorithm starts with the region growing process, which is based on three homogeneity criteria that take into account color similarity and spatial proximity. The resulting regions are then merged on the basis of a homogeneity criterion that takes into account only color similarity. The three criteria used for the region growing approach are:
1. a local homogeneity criterion, which corresponds to a local comparison between adjacent pixels;
2. a first average homogeneity criterion, which corresponds to a local and regional comparison between a pixel and its neighborhood, considering only the region under study;
3. a second average homogeneity criterion, which corresponds to a global and regional comparison between a pixel and the studied region.
From a visual point of view, the authors of [35] consider that regions which present similar color properties belong to the same class, even if they are spatially disconnected. Consequently, these regions are merged using a global homogeneity criterion which corresponds to a global comparison of the average color features representative of the two regions under study. They also consider that regions which are spatially dispersed in the image, such as details, edges, or high-frequency noise, have to be merged into the other regions either locally, pixel by pixel, or globally. All color comparisons are performed using the Euclidean distance measure in the RGB color space. Threshold values are computed according to an adaptive process relative to the color distribution of the image. Finally, it was suggested in [35] that the algorithm can be extended to other uniform color spaces, but new thresholds then have to be defined.
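A seeded region growing pass of the kind summarized in Steps 1-4 above can be written in a few lines. The sketch below is a simplified, hypothetical implementation: it grows one region per seed using 4-connectivity and the Euclidean color condition (6.13), and compares against the fixed seed color (the variant that compares against the running region mean would update `seed_color` after every assignment). All names are illustrative.

```python
import numpy as np
from collections import deque

def grow_regions(image, seeds, T):
    """Seeded region growing on an H x W x 3 color image.
    seeds: list of (row, col) seed positions; T: color threshold.
    Returns a label map (0 = unassigned, 1..len(seeds) = regions)."""
    h, w, _ = image.shape
    img = image.astype(np.float64)
    labels = np.zeros((h, w), dtype=np.int32)
    for region_id, (r0, c0) in enumerate(seeds, start=1):
        if labels[r0, c0] != 0:
            continue                      # seed already swallowed by another region
        seed_color = img[r0, c0]
        labels[r0, c0] = region_id
        queue = deque([(r0, c0)])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-neighbors
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and labels[rr, cc] == 0:
                    # homogeneity condition (6.13)
                    if np.sum((seed_color - img[rr, cc]) ** 2) < T ** 2:
                        labels[rr, cc] = region_id
                        queue.append((rr, cc))
    return labels
```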


de ne a spanning tree on the original graph and the order in which links are chosen de nes a hierarchy of image representations. Results presented in [36] suggested that no clear advantage was gained through the utilization of the L*a*b* color space.

6.3.2 Split and Merge

As opposed to the region growing technique of segmentation, where a region is grown from a seed pixel, the split-and-merge technique initially subdivides an image into a set of arbitrary, disjoint regions and then merges and/or splits the regions in an attempt to satisfy a homogeneity criterion between the regions. In [2] a split-and-merge algorithm that iteratively works toward satisfying these constraints was presented. The authors describe the split-and-merge algorithm initially proposed in [37]. The image is subdivided into smaller and smaller quadrant regions so that for each region a homogeneity criterion holds. That is, if for region R_i the homogeneity criterion does not hold, the region is divided into four sub-quadrant regions, and so on. This splitting technique may be represented in the form of a so-called quad-tree. The quad-tree data structure is the most commonly used data structure in split-and-merge algorithms because of its simplicity and computational efficiency [38]. A split artificial image and the corresponding quad-tree are shown in Fig. 6.1 and Fig. 6.2, respectively. Note that the root of the tree corresponds to the entire image. Merging of adjacent sub-quadrant regions is allowed if they satisfy a homogeneity criterion. The procedure may be summarized as:
1. Split into four disjoint quadrants any region for which the homogeneity criterion does not hold.
2. Merge any adjacent regions that satisfy the homogeneity criterion.
3. Stop when no further merging or splitting is possible.
Most split-and-merge approaches to image segmentation follow this simple procedure, with variations coming from the different color homogeneity criteria used.

Fig. 6.1. Partitioned image

Fig. 6.2. Corresponding quad-tree
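The splitting half of this procedure, which builds the quad-tree of Fig. 6.2, can be sketched as follows. This is a simplified illustration that splits a region whenever the total variance of its color values exceeds a threshold (one possible homogeneity criterion; the criteria actually used in [21] and [34] are described later in this section) and omits the merge step. Names and defaults are illustrative.

```python
import numpy as np

def quadtree_split(image, r0, c0, h, w, threshold, min_size=8, leaves=None):
    """Recursively split the rectangle (r0, c0, h, w) of a color image into
    quadrants until each leaf is homogeneous; returns the list of leaves."""
    if leaves is None:
        leaves = []
    block = image[r0:r0 + h, c0:c0 + w].reshape(-1, image.shape[2])
    # homogeneity test: total variance of the color features within the block
    if block.var(axis=0).sum() <= threshold or min(h, w) <= min_size:
        leaves.append((r0, c0, h, w))          # homogeneous (or too small): keep as a leaf
        return leaves
    h2, w2 = h // 2, w // 2                    # split into four sub-quadrants
    quadtree_split(image, r0,      c0,      h2,     w2,     threshold, min_size, leaves)
    quadtree_split(image, r0,      c0 + w2, h2,     w - w2, threshold, min_size, leaves)
    quadtree_split(image, r0 + h2, c0,      h - h2, w2,     threshold, min_size, leaves)
    quadtree_split(image, r0 + h2, c0 + w2, h - h2, w - w2, threshold, min_size, leaves)
    return leaves

# leaves = quadtree_split(img.astype(float), 0, 0, img.shape[0], img.shape[1], threshold=300.0)
```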


As mentioned in the previous section (Sect. 6.3.1), in [34] the authors compared a split-and-merge segmentation algorithm against an edge detection algorithm and a region growing algorithm. They tested all algorithms in the RGB, YIQ, HLS (hue, lightness, and saturation), and L*a*b* color spaces. They used statistical properties of the image regions to determine when to split and when to merge. In particular, they used the trace of the covariance matrix of a region to determine how homogeneous a given region is. If the mean color in a region with n pixels is:

$M = (m_1, m_2, m_3) = \left( \frac{\sum_i c_{1i}}{n}, \frac{\sum_i c_{2i}}{n}, \frac{\sum_i c_{3i}}{n} \right)$    (6.14)

where $(m_1, m_2, m_3)$ and $(c_1, c_2, c_3)$ represent the three color features of the mean of the region and of a pixel, respectively, then the trace of the covariance matrix is equal to:

$T = v_{11} + v_{22} + v_{33} = \left( \sum_i (c_{1i} - m_1)^2 + \sum_i (c_{2i} - m_2)^2 + \sum_i (c_{3i} - m_3)^2 \right) / n$    (6.15)

If the trace is above a user-specified threshold, the region is recursively split. Otherwise, the rectangular region is added to a list of regions to be subsequently merged. Two statistical measures for merging regions were employed in [34]. The first is based on the trace of the covariance matrix of the merged region. This value is calculated for the two regions being considered; if it is below the specified threshold, the two regions are merged, otherwise they are not. The second method considers the Euclidean color distance between the means of the two regions to be merged. As with their region growing method, the two regions are merged when this distance is below the specified threshold and not otherwise.
As mentioned in Sect. 6.2.2, the authors of [21] compared the quad-tree split-and-merge algorithm to the K-means clustering algorithm. They compared the two algorithms in seven color spaces. They tested the quad-tree split-and-merge algorithm explained above with two homogeneity criteria: (i) a homogeneity criterion based on functional approximation, and (ii) the mean and variance homogeneity criterion. The functional approximation criterion assumes that the color over a region may either be constant or variable due to intensity changes caused by shadows and surface curvature. Low-order bivariate polynomials were used as the set of approximating functions, because such functions detect useful information, such as abrupt changes in the color features, relatively well and ignore misleading information, such as changes in intensity caused by shadows and surface curvature, when the order is not too high. The set of low-order polynomials can be written as:


$f_m(x, y) = \sum_{i+j \leq m} a_{ij} x^i y^j, \qquad m \leq 4$    (6.16)

Using the above formula, the planar polynomial is obtained with m = 1 and the bi-quadratic polynomial with m = 2. The coefficient vector a is calculated with a least-squares solver. The fitting error $\epsilon$ for a region R is:

$\epsilon = \sqrt{ \frac{\sum_{(x,y) \in R} \left( g(x, y) - f_m(x, y) \right)^2}{n} }$    (6.17)

where n is the number of pixels in R and g(x, y) is the pixel value at coordinates (x, y). The fitting error $\epsilon$ is compared to the mean noise variance in the region, and the region is considered homogeneous if the error is less than this value. The mean and variance homogeneity criterion assumes that the color of the pixels over a region, discarding noise, is constant; it is based on the mean and variance of the region, which is the case for m = 0, that is, $f_0 = a_{00}$. The fitting error $\epsilon$ is calculated and compared to the mean noise variance of the region, as before. It was found that the split-and-merge method of image segmentation outperforms the K-means clustering method.
A major drawback of quad-tree-structured split-and-merge algorithms is their inability to adjust their tessellation to the underlying structure of the image data, because of the rigid rectilinear nature of the quad-tree structure. In [39] an image segmentation algorithm that reduces this drawback was introduced. The proposed split-and-merge algorithm employs incremental Delaunay triangulation as a directed region-partitioning technique which adjusts the image tessellation to the semantics of the image. A Delaunay triangulation of a set of points is a triangulation in which the circumcircle of any of its triangles does not contain any other point in its interior [40]. The homogeneity criterion is the same as that used in [21].
Region-based techniques of image segmentation are very common today because of their conceptual and computational simplicity. This lends them great attention when hybrid segmentation techniques are created. Region-based techniques are often mixed with other techniques, such as edge detection. These hybrid techniques will be described in Section 6.7.
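The functional-approximation criterion of (6.16)-(6.17) amounts to a small least-squares problem per region and per color channel. The following sketch is written under stated assumptions (a planar fit, m = 1, and a user-supplied noise estimate) and is illustrative rather than the exact procedure of [21].

```python
import numpy as np

def planar_fit_error(channel, noise_level):
    """Fit f_1(x, y) = a00 + a10*x + a01*y to one color channel of a region
    (a 2-D array) and report whether the region passes the homogeneity test."""
    h, w = channel.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), x.ravel(), y.ravel()])   # basis for m = 1
    g = channel.ravel().astype(np.float64)
    coeffs, *_ = np.linalg.lstsq(A, g, rcond=None)                # least-squares solve for a
    residual = g - A @ coeffs
    eps = np.sqrt(np.mean(residual ** 2))                         # fitting error (6.17)
    return eps, eps < noise_level                                 # homogeneous if below the noise estimate

# eps, ok = planar_fit_error(region_rgb[..., 0], noise_level=4.0)
```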

6.4 Edge-based Techniques

Edge-based segmentation techniques focus on the discontinuities between regions in the image. The color edge detection techniques discussed in Chap. 4 are being used today for image segmentation purposes. Once the edges within an image have been identified, the image can be segmented into different regions


based upon these edges. A disadvantage of edge-based techniques is their sensitivity to noise.
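As a simple illustration of the idea (not a method from the literature surveyed here), the sketch below computes a per-channel Sobel gradient magnitude, thresholds it to obtain an edge map, and labels the connected non-edge areas as regions; the threshold and the choice of gradient operator are assumptions.

```python
import numpy as np
from scipy import ndimage

def edge_based_segmentation(image, edge_threshold=60.0):
    """Segment a color image by labeling the connected components that remain
    after removing pixels with a strong color gradient."""
    img = image.astype(np.float64)
    # gradient magnitude accumulated over the three color channels
    grad = np.zeros(img.shape[:2])
    for c in range(img.shape[2]):
        gx = ndimage.sobel(img[..., c], axis=1)
        gy = ndimage.sobel(img[..., c], axis=0)
        grad += np.hypot(gx, gy)
    edges = grad > edge_threshold
    # connected components of the non-edge pixels form the regions
    labels, n_regions = ndimage.label(~edges)
    return labels, n_regions
```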

6.5 Model-based Techniques

Recently, much work has been directed toward stochastic model-based segmentation techniques [42,50,52], and [55]. In such techniques, the image regions are modeled as random fields and the segmentation problem is posed as a statistical optimization problem. Compared to the previous techniques, stochastic model-based techniques often provide a more precise characterization of the image regions. In fact, various stochastic models can be used to synthesize color textures that closely resemble the natural color textures found in real-world images [43]. This characteristic, along with the optimization formulation, provides better segmentation when the image regions are complex and otherwise difficult to discriminate with simple low-order techniques. Most of the techniques introduced use spatial interaction models such as the Markov random field (MRF) or the Gibbs random field (GRF) to model digital images. Although interest in MRF models for tackling image processing problems can be traced back to [41], only recently have the applicable mathematical tools for exploiting the full power of MRFs in image segmentation found their way into the image processing literature. The research methodologies reported in [43-49,52,53], and [54] all make use of Gibbs distributions for characterizing MRFs. Stochastic model-based color image segmentation techniques can be either supervised or unsupervised. In a supervised segmentation approach, the model parameters are obtained from training data, whereas in an unsupervised segmentation approach, the model parameters have to be estimated directly from the observed color image. Therefore, the unsupervised segmentation problem can be considered a model-fitting problem where a random field model is fitted to an observed image. The unsupervised approach is often necessary in practical applications where training data is not available, for example when only one image is available. In [49], the authors developed an unsupervised segmentation algorithm which uses Markov random field models for color textures. These models characterize a texture in terms of the spatial interaction within each color plane and the interaction between different color planes. The algorithm consists of a region-splitting phase and an agglomerative clustering phase and is performed in the RGB color space. In the region-splitting phase, the image is partitioned into a number of square regions that are recursively split until each region satisfies a homogeneity criterion. The agglomerative clustering phase is divided into a conservative merging process followed by a stepwise optimal merging process. Conservative merging uses color mean and covariance estimates for the efficient processing of local merges. After the conservative merging comes the stepwise optimal merging process, which at each iteration maximizes a


global performance functional based on the conditional pseudo-likelihood of the color image. The stepwise optimal merging process is stopped using a test based on rapid changes in the pseudo-likelihood of the image. In [51] the maximum a-posteriori (MAP) probability approach to image segmentation was introduced. The main points of the approach are presented in the following section.

6.5.1 The Maximum A-posteriori Method

The maximum a-posteriori probability (MAP) approach is motivated by the desire to obtain a segmentation that is spatially connected and robust in the presence of noise in the image. The MAP criterion functional consists of two parts: the class conditional probability distribution, which is characterized by a model that relates the segmentation to the data, and the prior probability distribution, which expresses the prior expectations about the resulting segmentation. In [47] the authors have proposed a MAP approach for the segmentation of monochromatic images, and have successfully used GRFs as a-priori probability models for the segmentation labels. The GRF prior model expresses the expectation about the spatial properties of the segmentation. In order to eliminate the isolated regions that arise in the segmentation in the presence of noise, the GRF model can be designed to assign a higher probability to segmentation results that have contiguous, connected regions. Thus, the estimate of the segmentation is not only dependent on the image intensity, but also constrained by the expected spatial properties imposed by the GRF model. The observed monochrome image data is denoted by y. Each individual pixel intensity is denoted by y_s, where s denotes the pixel location. A segmentation field, denoted by the N-dimensional vector x, is obtained by assigning labels to each pixel site in the image. A label x_s = i, i = 1, ..., K, implies that site s belongs to the i-th class among the K classes. The desired estimate of the segmentation label field is defined as the one that maximizes the a-posteriori pdf p(x|y) of the segmentation label field, given the observed image y. Using Bayes' rule:

p(x|y) ∝ p(y|x) p(x)    (6.18)

where p(y|x) represents the conditional pdf of the data given the segmentation labels, namely the class-conditional probability density function (pdf). The term p(x) is the a-priori probability distribution that can be modeled to impose a spatial connectivity constraint on the segmentation. A spatial connectivity constraint on the segmentation field can be imposed by modeling it as a discrete-valued GRF. A detailed discussion of GRF models can be found in [42], [44], [47]. The a-priori probability p(x) can be modeled as a Gibbs distribution:

p(x) = (1/Z) exp[−U(x)/T]    (6.19)


where the normalizing constant Z is called the partition function, T is the temperature constant, and U(x) is the Gibbs potential (Gibbs energy). The authors in [47] model the mean intensity of each image region as a constant, denoted by the scalar μ_i, i = 1, 2, ..., K. The conditional probability distribution is expressed as:

p(y|x) ∝ exp[ −Σ_s (y_s − μ_{x_s})² / (2σ²) ]    (6.20)

where μ is the mean intensity function. Note that this is the probability distribution used when the segmentation is estimated on the basis of the maximum likelihood (ML) criterion. It should be observed that the MAP estimation follows a procedure similar to that of the K-means algorithm: it starts with an initial estimate of the class means, assigns each pixel to one of the K classes by maximizing the criterion, then updates the class means using these estimated labels, and iterates between these two steps until convergence. The MAP method presented in [47] can be extended to color images. The three color channels of the image are denoted by the 3N-dimensional vector [y1; y2; y3]^t. A single segmentation field x, which is consistent with all three channels of data and is in agreement with the prior knowledge, is desired. By assuming conditional independence of the channels given the segmentation field, the conditional probability in (6.20) becomes:

p(y|x) = p(y1, y2, y3|x) = p(y1|x) p(y2|x) p(y3|x)    (6.21)

The image is modeled as consisting of K distinct regions, where the i-th region has the uniform mean color (μ_{1i}, μ_{2i}, μ_{3i}). As in the monochrome case, the resulting estimate can be computed with the K-means-like iteration described above; a minimal sketch follows.
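The Python sketch below illustrates this iterative estimation for the special case in which the GRF prior term and the channel variances are ignored (the ML case): class means are initialized, pixels are assigned to the class whose mean is closest (which maximizes the Gaussian class-conditional likelihood), the means are updated, and the two steps are iterated. It is a simplified illustration, not the algorithm of [47].

```python
import numpy as np

def ml_segment(img, K, iters=20):
    """K-means-like ML segmentation of an (H, W, 3) color image.

    Assigning each pixel to the nearest class mean maximizes the Gaussian
    class-conditional likelihood of (6.20)-(6.21) when the spatial prior
    p(x) and the channel variances are ignored."""
    pix = img.reshape(-1, 3).astype(float)
    # Deterministic initialization: K pixels spread evenly over the image.
    means = pix[np.linspace(0, len(pix) - 1, K).astype(int)]
    for _ in range(iters):
        # Assignment step: label each pixel with its closest class mean.
        d = np.linalg.norm(pix[:, None, :] - means[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: recompute each class mean from its assigned pixels.
        for k in range(K):
            if np.any(labels == k):
                means[k] = pix[labels == k].mean(axis=0)
    return labels.reshape(img.shape[:2]), means

# Hypothetical two-region test image: left half black, right half red.
img = np.zeros((8, 8, 3))
img[:, 4:] = [200.0, 30.0, 30.0]
labels, _ = ml_segment(img, K=2)
print(np.unique(labels))   # [0 1]: one label per region
```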

The posterior probability distribution can be written as:

p(x|y1, y2, y3) ∝ exp[ −Σ_{k=1}^{3} Σ_s (y_{ks} − μ_{k,x_s})² / (2σ_k²) − U(x)/T ]    (6.22)

and the MAP segmentation is the label field x that maximizes it.

6.8 Application

6.8.1 Pixel Classification

A pixel is classified as achromatic if its saturation and intensity values satisfy one of the following conditions:

saturation (S) < SATLOW = 10% of MAX, or
intensity (I) < INTLOW = 10% of MAX, or
intensity (I) > INTHIGH = 90% of MAX    (6.31)

where MAX is the maximum possible value. The threshold values were determined by experimental human observation. Pixels that do not fall into the achromatic category are categorized as chromatic pixels. Figures 6.4-6.15 depict images with the chromatic pixels shown in blue and the achromatic pixels left as they appear in the original image. In all the figures, the saturation and intensity values are given on a scale of 0 to 100. Figures 6.5-6.7 show the results when only the SATLOW threshold value changes, while


Figs. 6.9-6.11 show the results obtained when only the INTLOW threshold value changes. Finally, Figs. 6.13-6.15 summarize the results obtained when only the INTHIGH threshold value changes. In all three scenarios it can be observed that low threshold values classify achromatic pixels as chromatic, while high values classify chromatic pixels as achromatic. It may also be noted that most color images do not contain many achromatic pixels, as can be observed in Figs. 6.16-6.19.
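The following Python sketch illustrates the chromatic/achromatic classification of (6.31); it assumes, as in the figures, that saturation and intensity are given on a 0-100 scale, and the default threshold values are the ones quoted above.

```python
import numpy as np

def classify_pixels(S, I, sat_low=10.0, int_low=10.0, int_high=90.0):
    """Return a boolean mask that is True for achromatic pixels.

    S, I : saturation and intensity images on a 0-100 scale.
    A pixel is achromatic when its saturation is very low or its
    intensity is very low or very high; all other pixels are chromatic.
    """
    return (S < sat_low) | (I < int_low) | (I > int_high)

# Hypothetical 2x2 example: saturation and intensity per pixel.
S = np.array([[5.0, 50.0], [30.0, 40.0]])
I = np.array([[50.0, 95.0], [5.0, 60.0]])
print(classify_pixels(S, I))   # only the bottom-right pixel is chromatic
```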

Fig. 6.4. Original image. Achromatic pixels have intensity < 10, intensity > 90.
Fig. 6.5. Saturation < 5.
Fig. 6.6. Saturation < 10.
Fig. 6.7. Saturation < 15.

6.8.2 Seed Determination

The region growing algorithm starts with a set of seed pixels and from these grows regions by appending to each seed pixel those neighboring pixels that satisfy a certain homogeneity criterion, which will be described later. An unsupervised algorithm is used to find the best chromatic seed pixels in the


Fig. 6.8. Original image. Achromatic pixels have saturation < 10, intensity > 90.
Fig. 6.9. Intensity < 5.
Fig. 6.10. Intensity < 10.
Fig. 6.11. Intensity < 15.


image. These seed pixels will be the pixels that lie at the center of the regions in the image. Usually the pixels in the center of a homogeneous region are the pixels that are dominant in color. The algorithm is used only to determine the seeds of the chromatic regions. The seed determination algorithm applies variance masks to the image at different levels. Only the hue value of the pixels is considered in this approach because it is the most significant feature that may be used to detect uniform color regions [74]. All the pixels in the image are first considered as level zero seed pixels. At level one, a (3 × 3) non-overlapping mask is applied to the chromatic pixels in the image. The mask determines the variance, in hue, of the nine level zero pixels. If the variance is less than a certain threshold and the nine level zero pixels in the mask are chromatic pixels, then the center pixel of the mask is categorized as a level one seed pixel. The first level seeds represent (3 × 3) pixel regions in the image. In the second level, the non-overlapping mask is applied to the level one seed pixels in the image. Once again, the mask determines the variance of the average hue values of


Fig. 6.12. Original image. Achromatic pixels have saturation < 10, intensity < 10.
Fig. 6.13. Intensity > 85.
Fig. 6.14. Intensity > 90.
Fig. 6.15. Intensity > 95.

the nine level one seed pixels. If the variance is less than a certain threshold, the center pixel of the mask is considered as a level two seed pixel and the eight other level one seeds are disregarded as seeds. The second level seeds represent regions of (9 × 9) pixels. The process is repeated for successive level seed pixels until the seed pixels at the last level represent regions of a size just less than the size of the image. Typically, this is level 5 for an image that is a minimum of (3⁵ × 3⁵) pixels in dimension. Fig. 6.20 shows an example of an image with level 1, 2, and 3 seeds. The algorithm is summarized in the following steps, with a representing the level:
1. All chromatic pixels in the image are set as level 0 seed pixels. Set a to 1.
2. Shift the level a mask to the next nine pixels (to the beginning corner of the image if a was just increased).
3. If the mask reaches the end of the image, increase a and go to step 2.


Fig. 6.16. Original image.
Fig. 6.17. Pixel classification with chromatic pixels in red and achromatic pixels in the original color.
Fig. 6.18. Original image.
Fig. 6.19. Pixel classification with chromatic pixels in tan and achromatic pixels in the original color.

4. If all the seed pixels in the mask are of level a − 1, continue. If not, go to step 2.
5. Determine the hue variance of the nine level a − 1 seed pixels in the (3 × 3) mask. If a = 1, the variance is computed from the hue values of the nine pixels; otherwise, the average hue values of the level a − 1 seed pixels are used.
6. If the variance is less than a threshold TVAR, the center level a − 1 seed pixel is changed to a level a seed pixel and the other eight level a − 1 seed pixels are no longer considered as seeds.
7. Go to step 2.
Although the image is not altered by the algorithm, the result can be considered as a crude segmentation of the image.


Fig. 6.20. Artificial image with level 1, 2, and 3 seeds.

Since hue is considered as a circular value, the variance and average values of a set of hues cannot be calculated using standard linear equations. To calculate the average and variance of a set of hue values, the sums of the cosines and of the sines of the nine pixel hues must first be determined [75]:

C = Σ_{k=1}^{9} cos(H_k)    (6.32)

S = Σ_{k=1}^{9} sin(H_k)    (6.33)

where H_k is the hue value of pixel k in the (3 × 3) mask. The average hue AVGHUE of the nine pixels is then defined as:

AVGHUE = arctan(S/C)         if S > 0 and C > 0
       = arctan(S/C) + π     if C < 0                  (6.34)
       = arctan(S/C) + 2π    if S < 0 and C > 0

The variance VARHUE of the nine pixels is determined as follows:

VARHUE = (−2 ln(R))^(1/2)    (6.35)

where R is the mean resultant length of the hue vectors and is defined as:

R = (1/9) √(C² + S²)    (6.36)

If the value of VARHUE is less than the threshold TVAR, then the center level a − 1 seed is changed to a level a seed. The value of TVAR varies depending on the level. The threshold value for each level is determined with the following formula:

TVAR = VAR · a    (6.37)

where a and VAR are the level and an initial variance value, respectively. A short numerical sketch of these circular statistics is given below.
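The following Python sketch, a minimal illustration rather than the authors' implementation, computes the circular mean and the variance measure of (6.32)-(6.36) for the nine hue values in a mask and applies the level-dependent threshold of (6.37); hue values are assumed to be given in degrees and the initial variance value 0.2 follows the experiments reported later in the chapter.

```python
import numpy as np

def hue_stats(hues_deg):
    """Circular mean (6.34) and variance measure (6.35) of nine hues (degrees)."""
    h = np.radians(np.asarray(hues_deg, dtype=float))
    C, S = np.cos(h).sum(), np.sin(h).sum()          # (6.32), (6.33)
    avg = np.degrees(np.arctan2(S, C)) % 360.0       # quadrant handling, as in (6.34)
    R = np.sqrt(C ** 2 + S ** 2) / h.size            # (6.36)
    var = np.sqrt(-2.0 * np.log(R))                  # (6.35)
    return avg, var

def is_seed(hues_deg, level, var0=0.2):
    """Promote the mask center to a level-`level` seed when the hue variance
    is below the level-dependent threshold TVAR = var0 * level of (6.37)."""
    _, var = hue_stats(hues_deg)
    return var < var0 * level

# Hypothetical mask of nine nearly identical hues around 30 degrees.
print(is_seed([28, 30, 31, 29, 30, 32, 30, 29, 31], level=1))   # expected: True
```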


6.8.3 Region Growing

The region growing algorithm starts with a set of seed pixels and from these grows regions by appending to each seed pixel those neighboring pixels that satisfy a homogeneity criterion. The general growing algorithm is the same for the chromatic and achromatic regions in the image and is summarized in Fig. 6.21. The first seed pixel is compared to its 8-connected neighbors, i.e. the eight neighbors of the seed pixel. Any of the neighboring pixels that satisfy a homogeneity criterion are assigned to the first region. This neighbor comparison step is repeated for every new pixel assigned to the first region until the region is completely bounded by the edge of the image or by pixels that do not satisfy the criterion. The color of each pixel in the first region is then changed to the average color of all the pixels assigned to the region. The process is repeated for the next and each of the remaining seed pixels.


Fig. 6.21. The region growing algorithm.

For the chromatic regions, the algorithm starts with the set of varied-level seed pixels. The seed pixels of the highest level are considered first, followed by the next highest level seed pixels, and so on, until the level zero seed pixels are considered. The homogeneity criterion used for comparing the seed pixel and an unassigned pixel is that if the value of the distance metric between the unassigned pixel (i) and the seed pixel (s) is less than a threshold value Tchrom, then the pixel is assigned to the region. A minimal sketch of this growing procedure is given below.
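The following Python sketch, a simplified illustration rather than the authors' implementation, grows a single region from one seed by breadth-first expansion over 8-connected neighbors. The homogeneity predicate is passed in as a function: for chromatic regions it would be the cylindrical distance of (6.38) against Tchrom, and for achromatic regions the intensity difference of (6.42).

```python
from collections import deque

def grow_region(image, seed, is_similar):
    """Grow one region from `seed` (row, col).

    `image` is a 2-D array/list of pixel values and `is_similar(seed_val,
    pix_val)` is the homogeneity predicate.  Returns the set of pixel
    coordinates assigned to the region.
    """
    rows, cols = len(image), len(image[0])
    region = {seed}
    frontier = deque([seed])
    seed_val = image[seed[0]][seed[1]]
    while frontier:
        r, c = frontier.popleft()
        # Visit the 8-connected neighbors of the current pixel.
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr, dc) != (0, 0) and 0 <= nr < rows and 0 <= nc < cols:
                    if (nr, nc) not in region and is_similar(seed_val, image[nr][nc]):
                        region.add((nr, nc))
                        frontier.append((nr, nc))
    return region

# Hypothetical intensity image and an achromatic-style criterion |Is - Ii| < T.
img = [[10, 11, 50], [12, 10, 52], [11, 13, 49]]
print(sorted(grow_region(img, (0, 0), lambda s, p: abs(s - p) < 15)))
```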


The distance measure used for comparing pixel colors is a cylindrical metric, which combines an intensity distance with the distance between the projections of the pixel points on the chromatic plane. It is defined as follows [61]:

d_cylindrical(s, i) = √( (d_intensity)² + (d_chromaticity)² )    (6.38)

with

d_intensity = |I_s − I_i|    (6.39)

and

d_chromaticity = √( (S_s)² + (S_i)² − 2 S_s S_i cos θ )    (6.40)

where

θ = |H_s − H_i|          if |H_s − H_i| < 180°
  = 360° − |H_s − H_i|   if |H_s − H_i| > 180°    (6.41)

The value of d_chromaticity is the distance between the two-dimensional (hue and saturation) vectors, on the chromatic plane, of the seed pixel and the pixel under consideration. Therefore, d_chromaticity combines both the hue and saturation (chromatic) components of the color. The generalized Minkowski and the Canberra distance measures were also used in the experimentation, but in [76] it was found that, when comparing colors, the cylindrical distance metric is superior to the Minkowski and Canberra distance measures. With the cylindrical metric, good results were obtained for all the types of images tested. A reason for this may be that the HSI color space is a cylindrical color space, which matches the cylindrical distance measure, whereas the Canberra and Minkowski distance measures are not cylindrical and do not compensate for angular values. As Table 6.1 shows, the cylindrical distance measure is more discriminating, in terms of color difference, than the other two distance measures. Even though the second color similarity test compares two colors that are visually similar, the cylindrical distance between the colors is 3.43% of the maximum. This implies that the metric is able to discriminate two colors that are visually very similar. A pixel is assigned to a region if the value of the metric d_cylindrical is less than a threshold Tchrom. An examination of the metric equation (6.38) shows that it can be considered as a form of the popular Euclidean distance (L2 norm) metric. In the case of the achromatic pixels, the same region growing algorithm is used, but with all the achromatic pixels in the image considered as level zero seed pixels; there are no seed pixels of level one or higher. The seed determination algorithm is not used for the achromatic pixels because achromatic pixels constitute a small percentage of most color images. Since intensity is the only justified color attribute that can be used when comparing achromatic pixels, the homogeneity criterion is based on the intensity difference between the unassigned pixel and the seed pixel.


Table 6.1. Comparison of Chromatic Distance Measures

If the difference in intensity values between an unassigned pixel and the seed pixel is less than a threshold value Tachrom, then the pixel is assigned to the seed pixel's region. That is, if

|I_s − I_i| < Tachrom    (6.42)

then pixel i is assigned to the region of seed pixel s. A minimal code sketch of the cylindrical metric (6.38) is given below.
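The following Python sketch of the cylindrical distance (6.38)-(6.41) is an illustration rather than the authors' implementation; hue is assumed to be given in degrees, and saturation and intensity on the 0-100 scale used in the experiments.

```python
import math

def cylindrical_distance(h_s, s_s, i_s, h_i, s_i, i_i):
    """Cylindrical color distance between a seed pixel (h_s, s_s, i_s)
    and a candidate pixel (h_i, s_i, i_i) in HSI, hue in degrees."""
    d_intensity = abs(i_s - i_i)                               # (6.39)
    dh = abs(h_s - h_i)
    theta = dh if dh <= 180.0 else 360.0 - dh                  # (6.41)
    d_chromaticity = math.sqrt(
        s_s ** 2 + s_i ** 2 - 2.0 * s_s * s_i * math.cos(math.radians(theta))
    )                                                          # (6.40)
    return math.hypot(d_intensity, d_chromaticity)             # (6.38)

# Hypothetical comparison of two similar colors.
print(cylindrical_distance(30.0, 40.0, 50.0, 35.0, 42.0, 52.0))
```

A candidate pixel i would be appended to the region of seed s whenever this distance is below Tchrom, exactly as described above.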

6.8.4 Region Merging

The algorithm determines dominant regions from the hue histogram. Dominant regions are classified as regions that have the same color as the peaks in the histogram. Once these dominant regions are determined, each remaining region is compared to them with the same color distance metric used in the region growing algorithm (6.38). The region merging algorithm is summarized in the following steps (a short sketch follows the list):
1. Determine the peaks in the hue histogram of the region-grown image.
2. Classify regions that have the same color as these peaks as dominant regions.


3. Compare each of the non-dominant regions with the dominant regions using the cylindrical distance metric.
4. Assign a non-dominant region to a dominant region if the color distance is less than a threshold Tmerge.
The color of all the pixels in regions assigned to a dominant region is changed to the color of the dominant region.
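The sketch below, a simplified illustration under the assumption that each region is summarized by its average HSI color, follows these merging steps; `cylindrical_distance` is the function sketched in the previous subsection.

```python
def merge_regions(regions, dominant, t_merge=20.0):
    """Assign each non-dominant region to the closest dominant region
    whose cylindrical distance is below t_merge.

    `regions` and `dominant` are dicts mapping a region id to its average
    (hue, saturation, intensity).  Returns {region id: dominant id or None}.
    """
    assignment = {}
    for rid, (h, s, i) in regions.items():
        best, best_d = None, t_merge
        for did, (dh, ds, di) in dominant.items():
            d = cylindrical_distance(dh, ds, di, h, s, i)
            if d < best_d:
                best, best_d = did, d
        assignment[rid] = best   # None means the region is left unmerged
    return assignment
```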

6.8.5 Results

The performance of the proposed color image segmentation scheme was tested with a number of different images. The results for three of these images are presented here. The original images of 'Claire', 'Carphone', and 'Mother-Daughter' are displayed in Figs. 6.22, 6.26, and 6.30, respectively. These images are stills from multimedia sequences; more specifically, they are video-phone type images. The unsupervised seed determination algorithm found seeds that were in the center area of the regions in the image. It was found that increasing the variance threshold TVAR linearly with the level (i.e. TVAR = VAR · a) produced the best seed pixels. Fig. 6.23 shows the original 'Claire' image with the level 3 and higher seed pixels indicated as white pixels. Here VAR was set to 0.2; in particular, 1 level 4 and 43 level 3 seed pixels were found. It was found that, for all the images tested, setting VAR to 0.2 gives the best results with no undesirable seeds. Fig. 6.27 shows the original 'Carphone' image with VAR set to 0.2 and the level 2 and higher seed pixels indicated as white pixels. Here 19 level 2 seed pixels were found. Fig. 6.31 shows the original 'Mother-Daughter' image with VAR set to 0.2. Here 1 level 3 (white) and 152 level 2 (black) seed pixels were found. Figs. 6.24, 6.28, and 6.32 show the three experimental images after the region growing algorithm. It was found that the best results were obtained with threshold values of Tachrom = 15 and Tchrom = 15, which are, respectively, 15% and 7% of the maximum distance values for the achromatic and the chromatic distance measures. The results show that there are regions in these segmented images that require merging. Figs. 6.25, 6.29, and 6.33 show the three experimental images after the region merging step. The threshold value Tmerge that gives the best merging results for a varied set of images is 20, which is approximately 9% of the maximum chromatic distance value. Most of the regions that were similar in color before the region merging step are now merged.

6.9 Conclusion

Color image segmentation is crucial for multimedia applications. Multimedia databases utilize segmentation for the storage and indexing of images and


Fig. 6.22. Original 'Claire' image.
Fig. 6.23. 'Claire' image showing seeds with VAR = 0.2.
Fig. 6.24. Segmented 'Claire' image (before merging) with Tchrom = 0.15.
Fig. 6.25. Segmented 'Claire' image (after merging) with Tchrom = 0.15 and Tmerge = 0.2.

video. Image segmentation is also used for object tracking in the emerging MPEG-7 standard, and, as shown in the results, it is used in video conferencing to aid compression. These are only some of the multimedia applications of image segmentation. Segmentation is usually the first task of any image analysis process, and thus subsequent tasks rely heavily on its quality. A number of color image segmentation techniques have been surveyed in this chapter; they are summarized in Table 6.2. The particular color image segmentation method discussed in the last section of the chapter was shown to be very effective. Classifying pixels as either chromatic or achromatic avoids color comparisons between pixels whose color is undefined. The seed determination algorithm finds seed pixels that lie in the center of regions, which is vital when growing regions from these seeds. The cylindrical distance metric gives the best results when color pixels need to be compared. Merging regions that are similar in color is a final means of segmenting the image into even fewer regions. The segmentation method proposed is interactive [77]: the best threshold values for the segmentation scheme are suggested, but these values may easily be changed for different requirements, which allows control over the degree of segmentation.


Fig. 6.26. Original 'Carphone' image.
Fig. 6.27. 'Carphone' image showing seeds with VAR = 0.2.
Fig. 6.28. Segmented 'Carphone' image (before merging) with Tchrom = 0.15.
Fig. 6.29. Segmented 'Carphone' image (after merging) with Tchrom = 0.15 and Tmerge = 0.2.

References

1. Marr, D. (1982): Vision. Freeman, San Francisco, CA.
2. Gonzalez, R.C., Woods, R.E. (1992): Digital Image Processing. Addison-Wesley, Boston, Massachusetts.
3. Pal, N., Pal, S.K. (1993): A review on image segmentation techniques. Pattern Recognition, 26(9), 1277-1294.
4. Skarbek, W., Koschan, A. (1994): Color Image Segmentation: A Survey. Technical University of Berlin, Technical report 94-32.
5. Fu, K.S., Mui, J.K. (1981): A survey on image segmentation. Pattern Recognition, 13, 3-16.
6. Haralick, R.M., Shapiro, L.G. (1985): Survey: image segmentation techniques. Computer Vision, Graphics and Image Processing, 29, 100-132.
7. Pratt, W.K. (1991): Digital Image Processing. Wiley, New York, N.Y.
8. Wyszecki, G., Stiles, W.S. (1982): Color Science. New York, N.Y.
9. Ohlander, R., Price, K., Reddy, D.R. (1978): Picture segmentation using a recursive splitting method. Computer Graphics and Image Processing, 8, 313-333.
10. Ohta, Y., Kanade, T., Sakai, T. (1980): Color information for region segmentation. Computer Graphics and Image Processing, 13, 222-241.


Table 6.2. Color Image Segmentation Techniques

Pixel-based (decide based on the color of pixels; no spatial constraints; simple algorithms)
  - Histogram Thresholding: color regions are determined by thresholding peak(s) in the histogram(s); simple to implement; no spatial considerations.
  - Clustering: many clustering algorithms (K-means and fuzzy K-means); pixels in the image are assigned to the cluster that is most similar in color; adjacent clusters frequently overlap in color space, causing incorrect pixel assignment; also suffers from the lack of spatial constraints.

Edge-based (focus on the discontinuity of regions; sensitive to noise)
  - Techniques extended from monochrome techniques: monochrome techniques applied to each color component independently, with the results then combined; many first and second derivative operators can be used; Sobel, Laplacian and Mexican Hat operators are the most popular.
  - Vector space approaches: view the color image as a vector space; Vector Gradient, Entropy and Second Derivative operators have been proposed; sensitive to noise.

Region-based (focus on the continuity of regions; consider both color information and spatial constraints)
  - Region Growing: process of growing neighboring pixels or collections of pixels of similar color properties into larger regions; further merging of regions is usually needed.
  - Split and Merge: iteratively splitting the image into smaller and smaller regions and merging adjacent regions that satisfy a color homogeneity criterion; the quadtree is the most commonly used data structure in these algorithms.

Model-based: regions are modeled as random fields; most techniques use spatial interaction models such as MRF or Gibbs random fields; the maximum a-posteriori approach is the most common; high complexity.

Physics-based: allows the segmentation of color images based on physical models of image formation; basic methods are similar to the traditional methods above; most employ the Dichromatic Reflection Model; many assumptions are made; best results for images taken in a controlled environment.

Hybrid: combine the advantages of different techniques; the most common color image segmentation techniques today.


Fig. 6.30. Original 'Mother-Daughter' image.
Fig. 6.31. 'Mother-Daughter' image showing seeds with VAR = 0.2.
Fig. 6.32. Segmented 'Mother-Daughter' image (before merging) with Tchrom = 0.15.
Fig. 6.33. Segmented 'Mother-Daughter' image (after merging) with Tchrom = 0.15 and Tmerge = 0.2.

11. Holla, K. (1982): Opponent colors as a 2-dimensional feature within a model of the first stages of the human visual system. Proceedings of the 6th Int. Conf. on Pattern Recognition, Munich, Germany, 161-163.
12. von Stein, H.D., Reimers, W. (1983): Segmentation of color pictures with the aid of color information and spatial neighborhoods. Signal Processing II: Theories and Applications, North-Holland, Amsterdam, Netherlands, 271-273.
13. Tominaga, S. (1986): Color image segmentation using three perceptual attributes. Proceedings of the Computer Vision and Pattern Recognition Conference, CVPR'86, 628-630.
14. Gong, Y. (1998): Intelligent Image Databases: Towards Advanced Image Retrieval. Kluwer Academic Publishers, Boston, Massachusetts.
15. Hartigan, J.A. (1975): Clustering Algorithms. John Wiley and Sons, USA.
16. Tou, J., Gonzalez, R.C. (1974): Pattern Recognition Principles. Addison-Wesley Publishing, Boston, Massachusetts.
17. Tominaga, S. (1990): A color classification method for color images using a uniform color space. Proceedings of the 10th Int. Conf. on Pattern Recognition, 1, 803-807.
18. Celenk, M. (1988): A recursive clustering technique for color picture segmentation. Proceedings of Int. Conf. on Computer Vision and Pattern Recognition, CVPR'88, 437-444.


19. Celenk, M. (1990): A color clustering technique for image segmentation. Computer Vision, Graphics, and Image Processing, 52, 145-170.
20. McLaren, K. (1976): The development of the CIE (L*,a*,b*) uniform color space. J. Soc. Dyers Colour, 338-341.
21. Gevers, T., Groen, F.C.A. (1990): Segmentation of Color Images. Technical report, Faculty of Mathematics and Computer Science, University of Amsterdam.
22. Weeks, A.R., Hague, G.E. (1997): Color segmentation in the HSI color space using the K-means algorithm. Proceedings of the SPIE, 3026, 143-154.
23. Heisele, B., Krebel, U., Ritter, W. (1997): Tracking non-rigid objects based on color cluster flow. Proceedings, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 257-260.
24. Zadeh, L.A. (1965): Fuzzy sets. Information and Control, 8, 338-353.
25. Bezdek, J.C. (1973): Fuzzy Mathematics in Pattern Classification. Ph.D. Thesis, Cornell University, Ithaca, N.Y.
26. Bezdek, J.C. (1981): Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York, N.Y.
27. Huntsberger, T.L., Jacobs, C.L., Cannon, R.L. (1985): Iterative fuzzy image segmentation. Pattern Recognition, 18(2), 131-138.
28. Trivedi, M., Bezdek, J.C. (1986): Low-level segmentation of aerial images with fuzzy clustering. IEEE Transactions on Systems, Man, and Cybernetics, 16(4), 589-598.
29. Lim, Y.W., Lee, S.U. (1990): On the color image segmentation algorithm based on the thresholding and the fuzzy c-means techniques. Pattern Recognition, 23(9), 1235-1252.
30. Goshtasby, A., O'Neill, W. (1994): Curve fitting by a sum of Gaussians. CVGIP: Graphical Models and Image Processing, 56(4), 281-288.
31. Witkin, A.P. (1984): Scale space filtering: A new approach to multi-scale description. Proceedings of the IEEE Int. Conf. on Acoustics, Speech and Signal Processing, ICASSP'84(3), 39A1.1-39A1.4.
32. Koschan, A. (1995): A comparative study on color edge detection. Proceedings of the 2nd Asian Conference on Computer Vision, ACCV'95(III), 574-578.
33. Ikonomakis, N., Plataniotis, K.N., Venetsanopoulos, A.N. (1998): Grey-scale and image segmentation via region growing and region merging. Canadian Journal of Electrical and Computer Engineering, 23(1), 43-48.
34. Gauch, J., Hsia, C. (1992): A comparison of three color image segmentation algorithms in four color spaces. Visual Communications and Image Processing, 1818, 1168-1181.
35. Tremeau, A., Borel, N. (1997): A region growing and merging algorithm to color segmentation. Pattern Recognition, 30(7), 1191-1203.
36. Vlachos, T., Constantinides, A.G. (1992): A graph-theoretic approach to color image segmentation and contour classification. The 4th Int. Conf. on Image Processing and its Applications, IEE 354, 298-302.
37. Horowitz, S.L., Pavlidis, T. (1974): Picture segmentation by a directed split-and-merge procedure. Proceedings of the 2nd International Joint Conf. on Pattern Recognition, 424-433.
38. Samet, H. (1984): The quadtree and related hierarchical data structures. Computing Surveys, 16(2), 187-230.
39. Gevers, T., Kajcovski, V.K. (1994): Image segmentation by directed region subdivision. Proceedings of the 12th IAPR Int. Conf. on Pattern Recognition, 1, 342-346.
40. Lee, D.L., Schachter, B.J. (1980): Two algorithms for constructing a Delaunay triangulation. International Journal of Computer and Information Sciences, 9(3), 219-242.


41. Abend, K., Harley, T., Kanal, L.N. (1965): Classification of binary random patterns. IEEE Transactions on Information Theory, IT-11, 538-544.
42. Besag, J. (1986): On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society B, 48, 259-302.
43. Cross, G.R., Jain, A.K. (1983): Markov random field texture models. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5, 25-39.
44. Geman, S., Geman, D. (1984): Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-6, 721-741.
45. Cohen, F.S., Cooper, D.B. (1983): Real time textured image segmentation based on non-causal Markovian random field models. Proceedings of the SPIE, Conference on Intelligent Robots, Cambridge, MA.
46. Cohen, F.S., Cooper, D.B. (1987): Simple, parallel, hierarchical, and relaxation algorithms for segmenting non-causal Markovian random field models. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-9(2), 195-219.
47. Derin, H., Elliott, H. (1987): Modeling and segmentation of noisy and textured images using Gibbs random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-9(1), 39-55.
48. Lakshmanan, S., Derin, H. (1989): Simultaneous parameter estimation and segmentation of Gibbs random fields using simulated annealing. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-11(8), 799-813.
49. Panjwani, D.K., Healey, G. (1995): Markov random field models for unsupervised segmentation of textured color images. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-17(10), 939-954.
50. Langan, D.A., Modestino, J.W., Zhang, J. (1998): Cluster validation for unsupervised stochastic model-based image segmentation. IEEE Transactions on Image Processing, 7(2), 180-195.
51. Tekalp, A.M. (1995): Digital Video Processing. Prentice Hall, New Jersey.
52. Liu, J., Yang, Y.-H. (1994): Multiresolution color image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-16(7), 689-700.
53. Pappas, T.N. (1992): An adaptive clustering algorithm for image segmentation. IEEE Transactions on Signal Processing, 40(4), 901-914.
54. Chang, M.M., Sezan, M.I., Tekalp, A.M. (1994): Adaptive Bayesian segmentation of color images. Journal of Electronic Imaging, 3(4), 404-414.
55. Baraldi, A., Blonda, P., Parmiggiani, F., Satalino, G. (1998): Contextual clustering for image segmentation. Technical report, TR-98-009, International Computer Science Institute, Berkeley, California.
56. Brill, M.H. (1991): Photometric models in multispectral machine vision. Proceedings, Human Vision, Visual Processing, and Digital Display II, SPIE 1453, 369-380.
57. Healey, G.E. (1992): Segmenting images using normalized color. IEEE Transactions on Systems, Man, and Cybernetics, 22, 64-73.
58. Klinker, G.J., Shafer, S.A., Kanade, T. (1988): Image segmentation and reflection analysis through color. Proceedings, IUW'88, II, 838-853.
59. Klinker, G.J., Shafer, S.A., Kanade, T. (1990): A physical approach to color image understanding. International Journal of Computer Vision, 4(1), 7-38.
60. Shafer, S.A. (1985): Using color to separate reflection components. Color Research & Applications, 10(4), 210-218.
61. Tseng, D.-C., Chang, C.H. (1992): Color segmentation using perceptual attributes. Proceedings of the 11th International Conference on Pattern Recognition, III, 228-231.


62. Zugaj, D., Lattuati, V. (1998): A new approach of color images segmentation based on fusing region and edge segmentations outputs. Pattern Recognition, 31(2), 105-113.
63. Moghaddamzadeh, A., Bourbakis, N. (1997): A fuzzy region growing approach for segmentation of color images. Pattern Recognition, 30(6), 867-881.
64. Ito, N., Kamekura, R., Shimazu, Y., Yokoyama, T. (1996): The combination of edge detection and region extraction in non-parametric color image segmentation. Information Sciences, 92, 277-294.
65. Saber, E., Tekalp, A.M., Bozdagi, G. (1997): Fusion of color and edge information for improved segmentation and edge linking. Image and Vision Computing, 15, 769-780.
66. Xerox Color Encoding Standards (1989): Technical Report, Xerox Systems Institute, Sunnyvale, CA.
67. Beucher, S., Meyer, F. (1993): The morphological approach to segmentation: The watershed transformation. Mathematical Morphology in Image Processing, 443-481.
68. Duda, R.O., Hart, P.E. (1973): Pattern Classification and Scene Analysis. Wiley, New York, N.Y.
69. Shafarenko, L., Petrou, M., Kittler, J. (1998): Histogram-based segmentation in a perceptually uniform color space. IEEE Transactions on Image Processing, 7(9), 1354-1358.
70. Di Zenzo, S. (1986): A note on the gradient of a multi-image. Computer Vision, Graphics, and Image Processing, 33, 116-126.
71. Park, S.H., Yun, I.D., Lee, S.U. (1998): Color image segmentation based on 3-D clustering: Morphological approach. Pattern Recognition, 31(8), 1061-1076.
72. Levine, M.D. (1985): Vision in Man and Machine. McGraw-Hill, New York, N.Y.
73. Ikonomakis, N., Plataniotis, K.N., Venetsanopoulos, A.N. (1998): Color image segmentation for multimedia applications. Advances in Intelligent Systems: Concepts, Tools and Applications, Tzafestas, S.G. (ed.), 287-298, Kluwer, Dordrecht, Netherlands.
74. Gong, Y., Sakauchi, M. (1995): Detection of regions matching specified chromatic features. Computer Vision and Image Understanding, 61(2), 263-264.
75. Fisher, N.I. (1993): Statistical Analysis of Circular Data. Cambridge Press, Cambridge, U.K.
76. Ikonomakis, N., Plataniotis, K.N., Venetsanopoulos, A.N. (1999): A region-based color image segmentation scheme. SPIE Visual Communication and Image Processing, 3653, 1202-1209.
77. Ikonomakis, N., Plataniotis, K.N., Venetsanopoulos, A.N. (1999): User interaction in region-based color image segmentation. Visual Information Systems, Huijmans, D.P., Smeulders, A.W.M. (eds.), 99-106, Springer, Berlin, Germany.


7. Color Image Compression

7.1 Introduction

Over the past few years the world has witnessed a growing demand for visual information and communications applications. With the arrival of the 'Information Highway', applications such as tele-conferencing, digital libraries, video-on-demand, cable shopping and multimedia asset management systems are now commonplace. Hand in hand with the introduction of these systems and the simultaneous improvement in the quality of these applications came improved hardware and techniques for digital signal processing. The improved hardware, which offered greater computational power, combined with sophisticated signal processing techniques that allowed much greater flexibility in processing and manipulation, gave rise to new information applications as well as advances and better quality in existing applications. As the demand for new applications and for higher quality in existing applications continues to rise, the transmission and storage of the visual information becomes a critical issue [1], [2]. The reason for this is that higher image or video quality requires a larger volume of information, while transmission media have a finite and limited bandwidth. To illustrate the problem, consider a typical (512 × 512) monochrome (8-bit) image. This image requires 2,097,152 bits. Over a 64 Kbit/s communication channel, it would take about 33 seconds to transmit the image. Whereas this might be acceptable for a one-time transmission of a single image, it would definitely not be acceptable for tele-conferencing applications, where some form of continuous motion is required. The large volume of information contained in each image also creates storage difficulties. To store an uncompressed digital version of a 90-minute black and white movie, at 30 frames/sec, with each frame having (512 × 512 × 8) bits, would require about 3.4 × 10^11 bits, over 42 GBytes; a short sketch of this arithmetic is given below. Obviously, without any form of compression the amount of storage required for even a modest digital library would be staggeringly high. Also, higher image quality, which usually implies the use of color and higher image resolution, would be much more demanding in terms of transmission time and storage.
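A minimal Python sketch reproducing the storage and transmission arithmetic quoted above:

```python
bits_per_frame = 512 * 512 * 8                 # one 8-bit 512x512 monochrome frame
channel_rate = 64_000                          # 64 Kbit/s channel

print(bits_per_frame)                          # 2,097,152 bits
print(bits_per_frame / channel_rate)           # about 33 seconds to transmit

frames = 90 * 60 * 30                          # 90-minute movie at 30 frames/sec
total_bits = frames * bits_per_frame
print(total_bits, total_bits / 8 / 1e9)        # about 3.4e11 bits, over 42 GBytes
```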


To appreciate the need for compression and coding of visual signals such as color images and video frames, signal characteristics and their storage needs are summarized in Table 7.1.

Table 7.1. Storage requirements

Visual input   Pixels/frame   Bits/pixel   Uncompressed size
VGA image      640 × 480      8            3.74 Mb
XVGA image     1024 × 768     24           18.87 Mb
NTSC frame     480 × 483      16           3.71 Mb
PAL frame      576 × 576      16           5.31 Mb
HDTV frame     1280 × 720     12           11.05 Mb

The NTSC and PAL figures assume a 4:2:2 color sub-sampling scheme, and the HDTV signal a 4:1:1 color sub-sampling scheme. To address the problems of transmission and storage, different image compression algorithms can be employed to: (i) eliminate any information redundancies in the image, and (ii) reduce the amount of information contained in the image. Whereas the elimination of information redundancy does not hamper the quality of the image at all, eliminating necessary information comes at the cost of image quality degradation. Images and video signals are amenable to compression due to the following factors:
1. Spatial redundancy. Within a single image or video frame there exists significant correlation among neighboring pixels. Redundancy in an image also includes repeated occurrences of base shapes, colors and patterns within the image.
2. Spectral redundancy. For visual data, such as color images or multispectral images acquired from multiple sensors, there exists significant correlation among samples from the different spectral channels.
3. Temporal redundancy. For visual data, such as video streams, there is significant correlation among samples at different time instances. The most obvious form is the redundancy of repeated objects in consecutive frames of a video stream.
4. Observable redundancy. There is considerable information in the visual data that is irrelevant from a perceptual point of view. By taking advantage of the perceptual masking properties of the human visual system and by expressing its insensitivity to various types of distortion as a function of image color, texture and motion, compression schemes can derive a profile of the signal levels that produce just noticeable distortion (JND) in the image and video signals. Based on this profile, it is possible to create coding schemes that hide the reduction effects under the JND profile and thereby make the distortion perceptually invisible.


5. Meta-data redundancy. Some visual data, such as synthetic images, tend to have high-level features that are redundant across space and time, in other words data that are of a fractal nature.
Depending on the kind of information removed during the compression process, the following forms of compression can be defined:
1. Lossless compression. Lossless image compression allows the exact reconstruction of the original image during the decoding (decompression) process. The problem is that the best lossless image compression schemes are limited to modest compression gains. Lossless compression is mainly of interest in applications where image quality is more important than the compression ratio and the visual data must remain unchanged over many consecutive cycles of compression and decompression.
2. Lossy compression. Lossy compression algorithms allow only an approximate reconstruction of the original image from the compressed data. The lower the quality of the reconstructed image needs to be, the more the original image can be compressed. Examples of lossy compression schemes are the JPEG lossy compression mode used to compress still color images and the MPEG compression standards for video sequences [3-7]. All lossy compression schemes produce artifacts. Although in some applications the degradation may not be perceivable, it may become annoying after several cycles of compression and decompression. Traditionally, image compression techniques were able to achieve compression ratios of about 10:1 without causing a noticeable degradation in the quality of the image. However, any attempt to further reduce the bit rate would invariably result in noticeable distortions in the reconstructed image, usually in the form of block artifacts, color shifts and false contours.
3. Perceptually lossless compression. Perceptually lossless image compression deals with lossy compression schemes in which the degradation in image quality is not visible to human observers [8], [9]. Perceptually motivated compression schemes make use of the properties of the human visual system to further improve the compression ratio. In this type of coding, perceptually invisible distortions of the original image are accepted in order to attain very high compression ratios. Since not all signal frequencies in an image have the same importance, an appropriate frequency weighting scheme can be introduced during the encoding process. After the perceptual weighting has been performed, an optimized encoder can be used to minimize an objective distortion measure, such as the mean square error [10].
There are many coding techniques applicable to still, monochrome or color, images and video frames. These techniques can be split into distinct groups according to the way in which they deal with the source data. In particular, they can be defined as:


1. Waveform-based coding techniques. These techniques, also called first generation techniques, refer to methods that assume a certain model for the statistics of the pixels in the image. The primitives of these techniques are either individual pixels, blocks of pixels, or a transformed version of their values; these primitives constitute the message to be encoded. There are lossless and lossy waveform-based techniques. Lossless techniques include variable-length coding techniques, such as arithmetic coding and Lempel-Ziv coding, pattern matching, and statistics-based techniques, such as Fano or Huffman coding. Lossy waveform-based techniques include time-domain techniques, such as pulse code modulation (PCM) and vector quantization (VQ) [11], whereas frequency-domain techniques include methodologies based on transforms, such as the Fourier transform, the discrete cosine transform (DCT) [12], [13] and the Karhunen-Loeve (KL) transform, as well as techniques based on wavelets and subband analysis/synthesis systems [14], [15], [16].
2. Second generation coding techniques. Second generation techniques, model or object based, are techniques attempting to describe an image in terms of visually meaningful primitives, such as distinct color areas, strong edges, contours and texture. Emerging multimedia applications, such as multimedia databases and video-on-demand, will need access to visual data on an object-by-object basis. Visual components, such as color and shape, along with motion for video applications, can be used to support such requirements [17], [18], [19]. Fractal-based coding techniques can also be included in this group. These techniques are based on fractal theory, in which an image is reconstructed by means of an affine transformation of its self-similar regions. Fractal-based techniques produce outstanding compression results for images retaining a high degree of self-similarity, e.g. synthetic images [20].
Tables 7.2 and 7.3 give a perspective of the available techniques and their classification. Before reviewing some of the waveform-based and second generation techniques in greater detail, the basis on which these techniques are evaluated and compared will be given, along with a few of the important terms that are used throughout the chapter.

7.2 Image Compression Comparison Terminology

Compression methodologies are compared on the basis of the following dimensions of performance [21], [22]:
1. Image quality. Image quality refers to the subjective quality of the reconstructed image relative to the original, uncompressed image. This


Table 7.2. A taxonomy of image compression methodologies: First Generation

Waveform-based Techniques (1st Generation)
Lossless:
  - Entropy Coding: Huffman Coding, Arithmetic Coding, Lempel-Ziv Coding
  - Interpixel Redundancy: DPCM, Runlength Coding
Lossy:
  - Spatial Domain: DPCM, Vector Quantization, Scalar Quantization, Tree Structured Quantization, Predictive Vector Quantization, Finite-State Vector Quantization, Block Truncation Coding (BTC), Entropy Coded Version of Above
  - Transform Domain: Block Transform Coding (DFT, DCT, DST, KL), Subband Coding (SBC), Entropy Coded Version of Above
  - Hybrid Techniques: BTC/DPCM, BTC/VQ, SBC/DPCM, SBC/VQ

Table 7.3. A taxonomy of image compression methodologies: Second Generation

Second Generation Techniques
  - Object/Segmentation Coding: Texture Modeling/Segmentation, Contour Coding, Fractal Coding
  - Morphological Techniques
  - Model Based Coding Techniques


subjective assessment refers to an actual image quality rating done by human observers. The rating is done on a five-point scale called the mean opinion score (MOS), with the five points ranging from bad to excellent. Objective distortion measures, such as the mean square error (MSE), the relative mean square error (RMSE), the mean absolute error (MAE) and the signal-to-noise ratio (SNR), can quantify the amount of information loss an image has suffered, but in many cases they do not provide an accurate or even correct measure of the actual visual quality degradation.
2. Coding efficiency. Two of the more popular measures of an algorithm's efficiency are the compression ratio and the bit rate. The compression ratio is simply the ratio of the number of bits needed to encode the uncompressed image to the number of bits needed to encode the compressed image. An equivalent efficiency measure is the bit rate, which gives the average number of bits required to encode one image element (pixel). In the context of image compression, the higher the compression ratio, or the lower the bit rate, the more efficient the algorithm. However, the efficiency measure may be misleading if not considered in unison with the signal quality measure, since some image compression algorithms may compress the image by reducing the resolution, both temporal and spatial, or by reducing the number of quantization levels [9]. In other words, to evaluate the efficiency of two algorithms, their respective image quality must be the same. To that end, a measure that incorporates those two dimensions, efficiency and image quality, is the rate distortion function. The rate distortion function describes the minimum bit rate required for a given average distortion.
3. Complexity. The complexity of an algorithm refers to the computational effort required to carry out the compression technique. The computational complexity is often given in million instructions per second (MIPS), floating point operations per second (FLOPS), and cost. This performance dimension is important since an algorithm might be preferable to another one which is marginally more efficient but is much more computationally complex.
4. Communication delay. A performance dimension of lesser importance, mainly because it is not an important consideration in some applications, is the communication delay. This performance indicator refers to how much delay is allowed before the compressed image is transmitted. In cases where a large delay can be tolerated, e.g. facsimile, more time-consuming algorithms can be allowed. On the other hand, in two-way communication applications a long delay is definitely not allowed.


7.3 Image Representation for Compression Applications

When choosing a specific compression method, one should consider the data representation format. Images for compression may be in different formats, which are defined by:
- the color space used
- the number of bits per pixel
- the spatial resolution
- the temporal resolution, for video signals
Initially, image compression techniques were defined in the context of monochrome images. However, most of today's image applications are based on color representation. It was therefore necessary to extend these image compression techniques so that they can accommodate color images. The extension to color image compression is straightforward but requires an understanding of the various color models used. Linear RGB is the basic and most widely used color model for color display on monitors. It was mentioned in Chap. 1 that in RGB a color is represented as a composition of the three primary color spectral components of red, green, and blue. A color image can then be represented as three 8-bit planes corresponding to each of the primary colors, for a total of 24 bits/pixel [23]. The value in each of the color planes can be considered as a gray-scale value which represents the intensity of that particular color at the current pixel. This color representation can be compressed very easily using the regular spatial-domain image compression methods, such as entropy coding, by separately compressing each of the three color planes. However, the RGB space is not an efficient representation for compression because there is significant correlation between the three color components, since the image energy is distributed almost equally among them both spatially and spectrally. A solution is to apply an orthogonal decomposition of the color signals in order to compact the image data into fewer channels. The commonly used YUV, YIQ and YCbCr color spaces are examples of color spaces based on this principle. Theoretically, these color coordinate systems can provide nearly as much energy compaction as an optimal decomposition such as the Karhunen-Loeve transform. The resulting luminance-chrominance representation exhibits an unequal energy distribution favoring the luminance component, in which the vast majority of the fine-detail high frequencies can be found [24]. Since the sensitivity of the human visual system is relatively low for chrominance errors, the chrominance channels need only a fraction of the luminance resolution in order to guarantee sharpness in the perceived image. Therefore, the chrominance components are usually sub-sampled with respect to the luminance component when a luminance-chrominance representation, such as YCbCr, is used. There are three basic sub-sampling formats


for processing color images. In the 4:4:4 format all components have identical vertical and horizontal resolutions. In the 4:2:2 format, also known as the CCIR 601 format, the chrominance components have the same vertical resolution as the luminance component, but the horizontal resolution is halved. The most common format is 4:2:0, used in conjunction with the YCbCr color space in the MPEG-1 and MPEG-2 standards; each MPEG macroblock comprises four 8 × 8 luminance blocks and one 8 × 8 block each of the Cb and Cr color components. A 24 bits/pixel representation is also typical for the luminance-chrominance representation of digital video frames; however, a 10-bit representation of the components is used in some high-fidelity applications.
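The following Python sketch illustrates the idea of a luminance-chrominance representation with chrominance sub-sampling. The conversion coefficients are the common ITU-R BT.601 values, and the simple 2 × 2 averaging used for the 4:2:0 sub-sampling is an assumption made for illustration, not a prescription of any standard.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (H, W, 3) float RGB image (0-255) to Y, Cb, Cr planes
    using the BT.601 coefficients."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def subsample_420(chroma):
    """4:2:0 sub-sampling: average each 2 x 2 block of a chroma plane,
    halving both the horizontal and the vertical resolution."""
    h, w = chroma.shape
    return chroma[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Hypothetical 4x4 RGB test image.
rgb = np.random.default_rng(1).uniform(0, 255, size=(4, 4, 3))
y, cb, cr = rgb_to_ycbcr(rgb)
print(y.shape, subsample_420(cb).shape)   # (4, 4) (2, 2)
```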

7.4 Lossless Waveform-based Image Compression Techniques

Waveform-based image compression techniques reduce the bit rate by efficiently coding the image. The coding is done without considering the global importance of the pixel, segment, or block being coded. Conventional waveform-based techniques can be identified either as lossless techniques or as lossy techniques; both classes are described in detail below. There are two main ways in which the bit rate can be reduced without losing any information: the first is simply to use efficient codes to code the image, and the second is to reduce some of the redundant information that exists in the image.

7.4.1 Entropy Coding

In entropy coding, bit rate reduction is based solely on codeword assignment. Entropy is the amount of information based on the probabilistic occurrence of picture elements. Mathematically, entropy is defined as:

H(X) = − Σ_i P(x_i) log P(x_i)    (7.1)

where P(x_i) is the probability that the monochrome value x_i will occur, and H(X) is the entropy of the source measured in bits [25]. These probabilities

can be found from the image's histogram. In this sense, the entropy describes the average information, or uncertainty, of every pixel. Since it is very unlikely that each of the possible gray levels will occur with equal probability, variable-length codewords can be assigned to the pixel values, with the more probable pixel values being assigned shorter codewords, thus achieving a shorter average codeword length. This coding (compression) principle is employed by the following coding methods (a short worked sketch is given after the list):
1. Huffman coding. This is one of the most straightforward and practical encoding methods. Huffman coding assigns fixed codewords to the source


words (in this case the source words being the pixel values). The least probable source words are assigned the longest codewords, whereas the most probable are assigned the shortest codewords. This method requires knowledge of the image's histogram. With this codeword assignment rule, Huffman coding approaches the source's entropy. The main advantage of this method is the ease of implementation: a table is simply used to assign source words their corresponding codewords. The main disadvantages are that the size of the table is equal to the number of source words, and that the table with all the codeword assignments also has to be made known to the receiver.
2. Arithmetic coding. Arithmetic coding can approach the entropy of the image more closely than Huffman coding. Unlike Huffman coding, there is no one-to-one correspondence between the source words and the codewords [26]. In arithmetic coding, the codeword defines an interval between 0 and 1. The specific interval is based on the probability of occurrence of the source word. The main idea of arithmetic coding is that blocks of source symbols can be coded together by simply representing them with smaller and more refined intervals (as the block of source symbols grows, more bits are required to represent the corresponding interval) [26]. Compared to Huffman coding, the main advantage of this method is that fewer bits are required to encode the image, since it is more economical to encode blocks of source symbols than individual source symbols. Also, no codeword table is required in this method, and thus arithmetic coding does not have the problem of memory overhead. However, the computational complexity of arithmetic coding is considerably higher than that of Huffman coding.
3. Lempel-Ziv coding. Lempel-Ziv coding is a universal coding scheme, in other words a coding scheme which approaches entropy without prior knowledge of the probability of occurrence of the source symbols. Unlike the two entropy methods mentioned above, the Lempel-Ziv coding method assigns blocks of source symbols of varying length to fixed-length codewords. In this coding method the source input is parsed into strings that have not been encountered thus far. For example, if the strings '0', '1', and '10' are the only strings that have been encountered so far, then the strings '11', '100', and '101' are examples of strings that are yet to be encountered and recorded. When a new string is encountered, it is recorded by indexing its prefix (which corresponds to a string that has already appeared) and its last bit. The main advantage of this coding method is that absolutely no prior knowledge of the source symbol probabilities is needed. The main disadvantage is that since all codewords are of fixed length, short input source sequences, such as low-resolution images, might be encoded into longer output sequences. However, this method does approach entropy for long input sequences.
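The sketch below, a minimal illustration rather than a production coder, computes the entropy of (7.1) from a pixel histogram and builds a Huffman code for the same symbol probabilities, so that the average codeword length can be compared with the entropy. The 100-pixel source is hypothetical.

```python
import heapq
import math
from collections import Counter
from itertools import count

def entropy(pixels):
    """Entropy (7.1) in bits/pixel, estimated from the pixel histogram."""
    hist = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist.values())

def huffman_code(pixels):
    """Build a Huffman code {symbol: bit string} from the pixel histogram."""
    hist = Counter(pixels)
    tie = count()  # tie-breaker so heapq never compares the symbol lists
    heap = [(c, next(tie), [(sym, "")]) for sym, c in hist.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        c0, _, lo = heapq.heappop(heap)
        c1, _, hi = heapq.heappop(heap)
        merged = [(s, "0" + code) for s, code in lo] + [(s, "1" + code) for s, code in hi]
        heapq.heappush(heap, (c0 + c1, next(tie), merged))
    return dict(heap[0][2])

pixels = [0] * 50 + [1] * 25 + [2] * 15 + [3] * 10   # hypothetical 100-pixel image
code = huffman_code(pixels)
avg_len = sum(len(code[p]) for p in pixels) / len(pixels)
print(entropy(pixels), avg_len)   # average length is close to, but not below, the entropy
```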


It is important to note that entropy coding can always be used to supplement other, more sophisticated and efficient algorithms, by assigning variable length codewords to the output of those algorithms. It should also be emphasized that entropy coding utilizes only the probability of occurrence of the different pixel values but not the correlation between the values of neighboring pixels. Entropy coding can therefore reduce the bit rate by usually no more than 20-30%, resulting in a compression ratio of up to about 1.4:1.
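As an illustration of the Lempel-Ziv parsing described in item 3 above, the sketch below splits a bit string into phrases that have not been seen before and records each new phrase as the index of its prefix phrase plus its last bit. This is a simplified LZ78-style parser with names of our choosing, not a production implementation.

```python
def lz_parse(bits):
    """Parse a bit string into (prefix_index, last_bit) pairs."""
    dictionary = {"": 0}          # phrase -> index; index 0 is the empty phrase
    output, phrase = [], ""
    for b in bits:
        if phrase + b in dictionary:
            phrase += b           # keep extending until the phrase is new
        else:
            output.append((dictionary[phrase], b))
            dictionary[phrase + b] = len(dictionary)
            phrase = ""
    if phrase:                    # flush a trailing, already-seen phrase
        output.append((dictionary[phrase], ""))
    return output

# After the phrases '0', '1' and '10' have been recorded, strings such as
# '11', '100' and '101' would be parsed as new phrases.
codes = lz_parse("0110101")       # [(0, '0'), (0, '1'), (2, '0'), (3, '1')]
```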

7.4.2 Lossless Compression Using Spatial Redundancy

More significant bit rate reduction can be realized if the interpixel redundancy that exists in the image is reduced. Since images are generally characterized by large regions of constant or near constant pixel values, there is considerable spatial redundancy that can be removed. The following is a description of some of the common methods that can be used to remove this redundancy without losing any information.

1. Predictive coding. One way of reducing the spatial redundancy is to use neighboring pixel values as an estimate of the current pixel [25], [28]. Therefore, instead of encoding the actual value of the current pixel, the difference between the predicted value, predicted from pixels that were already traversed, and the actual pixel value is encoded. The coding method is called differential pulse code modulation (DPCM). Since that difference will generally be small, the dynamic range of the error will be much smaller than the dynamic range of the pixel values, and therefore an entropy coding method can be used very effectively to encode the error. The overall coding procedure can be summarized as follows:
- Find a linear estimate of the current pixel from its neighbors according to:

\hat{f}(m,n) = \sum_{i} \sum_{j} a(i,j) \, f(m-i, n-j)   (7.2)

In many cases the estimate is rounded off to the closest integer so that there will not be a need to deal with decimals. In addition, the only pixels allowed to be used in the estimation are those that occur prior to the current one, since these are the pixels that will be available during the image reconstruction.
- Find the error between the actual value of the current pixel and the corresponding estimate according to:

e(m,n) = f(m,n) - \hat{f}(m,n)   (7.3)

Encode the error value using one of the several entropy coding techniques described before.
- At the decoder end, an estimate of the current pixel is again derived using the same prediction model, and the decoded error that was transmitted is added to the estimate to obtain the original value of the pixel according to:

f(m,n) = \hat{f}(m,n) + e(m,n)   (7.4)

This compression scheme can achieve much better compression ratios than those obtained by only using entropy coding schemes. The compression ratios tend to vary from 2:1 to 3:1 [27]. The variation in compression ratio is due to several factors. One of the main factors is the particular set of parameters chosen to estimate the pixels. Indeed, better prediction parameters will result in closer estimates and by extension will reduce the bit rate. Moreover, if adaptive linear prediction parameters are chosen by splitting the image into smaller blocks and computing the prediction parameters for each block, the compression ratio can be further improved [25]. Another way of improving the compression ratio is to scan the image using a different pattern, such as the Peano scan or Worm path patterns [28], rather than the regular raster scan pattern from left to right and top to bottom. By traversing the image in a different order, estimates of the current pixel can also be derived from pixels which are below it, and thus further reduce the interpixel redundancy. (A minimal DPCM sketch appears after this list.)

2. Runlength coding. This coding algorithm is intended mainly for the compact compression of bi-level images and is widely used for fax transmissions. This scheme centers on the fact that there are only two types of pixels, namely black and white. Also, since high correlation between neighboring pixels exists, it is enough to simply indicate where a black or white run of pixels starts and how long it is in order to perfectly reconstruct the image from that information. The runlength coding method most often used is based on the relative address coding (RAC) approach [26]. This specific method codes the runs of black or white pixels on the current line relative to the black and white runs of the previous line. This way the correlation between both vertical and horizontal neighbors is exploited to reduce the interpixel redundancy. The coding algorithm is as follows:
- Two coding modes are defined. The first one is the horizontal mode, which codes the black and white runs without referring to the previous line, and the second is the vertical mode, where the previous line is taken into account in order to take advantage of the vertical correlation of the image [29].
- The horizontal mode uses a Huffman coding method to assign the various black and white runlengths variable length codewords (based on the probability of occurrence of a specific runlength in a typical image). In the vertical mode the coded information just indicates the beginning and ending position of the current runlength relative to the corresponding runlength in the previous line.
- The first line is always coded using the horizontal mode. Furthermore, one in every few lines also has to be coded using the horizontal mode to reduce the susceptibility of this scheme to error. All other lines are coded using the vertical mode [29].


Compression ratios of 9:1 to 11:1 are achieved using this technique on bi-level images [29]. However, gray scale and color images are usually ill-suited for this type of compression method. This is because coding a monochrome image using runlength coding requires bit-plane decomposition of the image, namely breaking down the m-bit gray scale image into m 1-bit planes [26]. While it is found that high correlation exists between the pixels of the most significant bit-planes, there is significantly less correlation in the less significant bit-planes, and thus the overall compression ratios achieved are not as high as those achieved using predictive coding [30].
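The following is a minimal sketch of the lossless DPCM loop of equations (7.2)-(7.4), using a single left-neighbor predictor so the round trip is easy to verify; a practical coder would use a two-dimensional predictor and entropy-code the errors. Function names are ours, for illustration only.

```python
import numpy as np

def dpcm_encode(image):
    """Row-wise DPCM: e(m, n) = f(m, n) - f(m, n-1); the first column is kept as-is."""
    img = image.astype(np.int16)
    errors = img.copy()
    errors[:, 1:] = img[:, 1:] - img[:, :-1]
    return errors

def dpcm_decode(errors):
    """Invert the prediction: f(m, n) = f(m, n-1) + e(m, n)."""
    return np.cumsum(errors, axis=1).astype(np.uint8)

img = np.tile(np.arange(0, 80, 10, dtype=np.uint8), (4, 1))   # smooth test rows
assert np.array_equal(dpcm_decode(dpcm_encode(img)), img)      # lossless round trip
```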

7.5 Lossy Waveform-based Image Compression Techniques

Lossy compression techniques allow for some form of information loss and possibly some degradation in the quality of the image. As was mentioned above, the best that can be achieved in terms of compression when perfect reconstruction is required is about a 2:1 to 3:1 compression ratio. However, when the perfect reconstruction constraint is dropped, much higher compression ratios can be achieved. The tradeoff is, of course, in the quality of the image and the complexity of the algorithm. Lossy coding can be done using either spatial domain or transform domain methods. The following sections will consider both.

7.5.1 Spatial Domain Methodologies

Lossy spatial domain coding methods, much like their lossless counterparts, exploit the spatial redundancy in an image. However, in lossy coding, the accuracy of representing the residual information, that is, the information that remains once the basic redundancy is removed, is compromised in order to obtain higher compression ratios. The compressed image cannot then be perfectly reconstructed due to this inaccurate lossy representation. Some of the common spatial domain coding methods are described below.

1. Predictive coding. Lossy predictive coding essentially follows the same steps as lossless predictive coding, with the exception that a quantizer is used to quantize the error between the actual and predicted values of the current pixel [26]. When a quantizer is used, there are only several discrete values that the encoded error value can take, and thus there is an improvement in the compression ratio. However, use of a quantizer results in quantization error, and the image cannot be perfectly reconstructed since the actual error values are no longer available. The performance of this coding method in terms of coding efficiency and reconstructed image quality depends on the:

- Prediction model. The proper choice of prediction parameters, either adaptive or global, will minimize the prediction error and improve the compression ratio. The number of previous pixels that are used to predict the value of the current pixel will also affect the effectiveness of the prediction. The scanning pattern, raster scan or Peano scan, also affects the performance of this coding method.
- Quantizer. The choice of the number of quantizer levels and the actual levels used. Given a specified number of quantizer levels, the problem is reduced to finding the decision levels, and the reconstruction levels (the unique values into which the decision level intervals are mapped in a many-to-one fashion), that minimize the given error criterion, objective or subjective. An example of a quantizer that minimizes the mean-square error is the Lloyd-Max quantizer.

2. Vector quantization. This compression technique operates on blocks rather than on individual pixels. It can decompress visual information in real time using software, without the use of special hardware, and does so with reasonable quality. The main idea of vector quantization (VQ) is that a block of k image pixels, which henceforth will be referred to as a block of dimension k, can be represented by a k-dimensional template chosen from a table of pre-defined templates [11]. The template to represent a particular k-dimensional block is chosen on the basis of minimizing some error criterion, such as the template closest to the block in some sense (a nearest-template sketch follows this list). A code representing the chosen template is then transmitted. The encoder and the decoder use the same codebook. To optimize performance, a training method involving the use of test sequences is utilized to generate the codebook in an automatic manner. At the receiver end, the decoder can use the index to fetch the corresponding codeword and use it as the decompressed output. The decompression is not as computationally intensive as that employed in transform based schemes, such as JPEG [31]. The coding efficiency, typically up to a 10:1 compression ratio, and the image quality will depend on the following:
- Template table size. Large tables with a large number of templates result in smaller quantization errors. This translates to a better reconstructed image quality. However, large template tables require longer codes to represent the selected template and so the bit rate increases.
- Choice of templates. The main problem with the VQ method is that the specific templates chosen to represent the blocks are usually image dependent. Hence, it is hard to construct a table that will yield a consistent image quality performance which is independent of the image. Also, to improve the subjective quality of the image it is sometimes necessary to construct context dependent templates. For example, specific templates are needed for situations in which the k-dimensional block has an edge, and different templates should be considered for situations


where the block is a shade [11]. This inevitably increases the size of the template table and with it the computational complexity and bit rate.
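A small sketch of the vector quantization idea follows: each block, flattened to a k-dimensional vector, is mapped to the index of its nearest codebook template in the mean-square sense, and the decoder simply looks the template up. The three-entry codebook here is invented purely for illustration; real codebooks are trained from test sequences as described above.

```python
import numpy as np

# Hypothetical codebook of k = 4 templates (flattened 2x2 blocks).
codebook = np.array([[0, 0, 0, 0],          # flat dark block
                     [255, 255, 255, 255],  # flat bright block
                     [0, 0, 255, 255]],     # simple edge block
                    dtype=np.float64)

def vq_encode(blocks):
    """Return, for every block, the index of the closest template (MSE sense)."""
    dists = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

def vq_decode(indices):
    return codebook[indices]

blocks = np.array([[10, 5, 3, 0], [240, 250, 255, 251]], dtype=np.float64)
reconstructed = vq_decode(vq_encode(blocks))   # lossy: nearest templates only
```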

7.5.2 Transform Domain Methodologies

Transform domain coding methods have become by far the most popular and widely used conventional compression techniques. In this type of coding the image is transformed into an equivalent image representation. Common linear transformations that are used in transform coding are the Karhunen-Loeve (KL) transform, the discrete Fourier transform (DFT), the discrete cosine transform (DCT), and others. The main advantage of this kind of representation is that the transformed coefficients are fairly de-correlated and most of the energy, and therefore most of the information, of the image is concentrated in only a small number of these coefficients. Hence, by proper selection of these few important coefficients, the image can be greatly compressed. There are two transform domain coding techniques that warrant special attention: discrete cosine transform (DCT) coding and subband coding [32].

The DCT transform and the JPEG compression standard. Of the many linear transforms known, the DCT has become the most widely used. The two dimensional DCT pair (forward and inverse transform), used for image compression, can be expressed as follows [34], [31], [33]:

C(u,v) = \frac{2}{N} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y) \cos\left[\frac{(2x+1)u\pi}{2N}\right] \cos\left[\frac{(2y+1)v\pi}{2N}\right]   (7.5)

for u, v = 0, 1, ..., N-1 (for u, v = 0, the scaling factor is 1/N)

f(x,y) = \frac{2}{N} \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} C(u,v) \cos\left[\frac{(2x+1)u\pi}{2N}\right] \cos\left[\frac{(2y+1)v\pi}{2N}\right]   (7.6)

for x, y = 0, 1, ..., N-1 (for u, v = 0, the scaling factor inside the sum is 1/N)

In principle, the DCT introduces no loss to the original image samples. It simply transforms the image pixels to a domain in which they can be more efficiently encoded. In other words, if there are no additional steps, such as quantization of the coefficients, the original image block can be recovered exactly. However, as can be seen from (7.5) and (7.6), the calculations contain transcendental functions. Therefore, no finite time implementation can compute them with perfect accuracy. Because of the finite precision used for the DCT inputs and outputs, coefficients calculated by different algorithms, or by discrete implementations of the same algorithm, will result in slightly different output for identical input.
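As a sanity check on equations (7.5)-(7.6), the following sketch evaluates the forward 2-D DCT of a block directly from the cosine basis, using the per-index normalization sqrt(1/N) for index 0 and sqrt(2/N) otherwise, which reproduces the 1/N scaling noted above for u = v = 0. Real codecs use fast factorizations instead; the function and variable names are ours.

```python
import numpy as np

def dct2(block):
    """Forward 2-D DCT of an N x N block, following (7.5)."""
    N = block.shape[0]
    x = np.arange(N)
    # basis[u, x] = cos((2x + 1) * u * pi / (2N))
    basis = np.cos((2 * x[None, :] + 1) * np.arange(N)[:, None] * np.pi / (2 * N))
    scale = np.full(N, np.sqrt(2.0 / N))
    scale[0] = np.sqrt(1.0 / N)              # so that u = v = 0 gets the 1/N factor
    return np.outer(scale, scale) * (basis @ block @ basis.T)

block = np.full((8, 8), 128.0)               # a flat block
coeffs = dct2(block)                          # only the DC coefficient is non-zero
```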


Nevertheless, the DCT offers a good and practical compromise between information packing ability, that is, packing a lot of information into a small number of coefficients, computational complexity, and minimization of blocking artifact distortion [26]. These attributes are what prompted the International Standards Organization (ISO) and the Joint Photographic Experts Group (JPEG) to base their international standard for still image compression on the DCT. The JPEG standard is used for compressing and decompressing continuous tone monochrome as well as color images. Applications range from compressing images for audio-graphical presentations to desktop publishing, multimedia database browsing and tele-medicine. JPEG is of reasonably low computational complexity, is capable of producing compressed images of high quality, and can provide both lossless and lossy compression of arbitrarily sized images. JPEG converts a block of an image in the spatial domain into the frequency domain using the DCT. Since the human visual system is not sensitive to high spatial frequencies, coarser quantization levels can be used to generate a rough representation of the high spatial frequency portion of the image. Because the coarser representation requires fewer bits, the process reduces the amount of information that needs to be stored or communicated. The JPEG standard does not specify any particular color model to be used for the color image representation. However, in most cases JPEG handles colors as independent components so that each component can be processed as a monochrome image; the necessary color space transforms are performed before and after the JPEG algorithm. As there are many ways to represent color images, the standard does not specify any color space for the representation of the color images. Currently the JPEG standard is set up for use with any three-variate color space. Common color representations used in conjunction with the standard include color models such as the linear RGB, the YIQ, the YUV and the YCbCr color spaces. Experimentation with different color spaces indicates that tristimulus color models are not very efficient for use as a color compression space. For example, the major weakness of the linear RGB color space, from a compression point of view, is the spectral redundancy in the three channels. Simulation studies have revealed that color information is encoded much less efficiently in the RGB color space than in other color spaces. Similarly, studies show that perceptually uniform color spaces, such as the CIE L*a*b* space, are good color compression spaces. Color spaces derived linearly from RGB, such as the YIQ, YUV and YCbCr, also provide excellent results. On the contrary, perceptually motivated spaces, such as the HSV, do not constitute an efficient color space for compression purposes. The poor performance should be attributed mainly to the poor quantization of the hue values using the default quantization tables [24]. In summary, it can be said that the JPEG algorithm is a color space dependent procedure, and both numerical measures and psychophysical techniques indicate that uncorrelated color spaces, such as the YCbCr, should be used to maximize the coding gain.


The major objective of the JPEG committee was to establish a basic compression technique for use throughout industry. For that reason the JPEG standard was constructed to be compatible with all the various types of hardware and software that would be used for image compression. To accomplish this task a baseline JPEG algorithm was developed. Changes could be made to the baseline algorithm according to individual users' preferences, but only the baseline algorithm would be universally implemented and utilized. Compression ratios that range from 5:1 to 32:1 can be obtained using this method, depending on the desired quality of the reconstructed image and the specific characteristics of the image. JPEG provides four encoding processes for applications with communications or storage constraints [3], namely:

1. Sequential mode. In the JPEG sequential mode, or baseline system, the color image is encoded in a raster scan manner from left to right and top to bottom. It uses a single pass through the data to encode the image and employs an 8-bit representation per channel for each input.

2. Lossless mode. An exact replica of the original color image can be obtained using the JPEG lossless mode. This mode is intended for applications requiring lossless compression, such as medical systems where scans are stored, indexed, accessed and transmitted from site to site on demand, and multimedia systems processing photographs for accident claims, banking forms or insurance claims. In this mode the image pixels are handled separately. Each pixel is predicted based on three adjacent pixels using one of eight possible predictor models. An entropy encoder is then used to losslessly encode the predicted pixels.

3. Progressive mode. The color image is encoded in multiple scans and each scan improves the quality of the reconstructed image by encoding additional information. Progressive encoding depends on being able to store the quantized DCT coefficients for an entire image. There are two forms of progressive encoding in JPEG: the spectral selection and the successive approximation methodologies. In the first approach the image is encoded from a low frequency representation to a high frequency sharp image. In the JPEG spectral selection progressive mode the image is transformed to the frequency domain using the DCT. The initial transmission sends the low frequency DCT coefficients, followed by the higher frequency coefficients, until all the DCT coefficients have been transmitted. Reconstructed images from the early scans are blurred, since each image lacks the high frequency components until the end layers are transmitted. In the JPEG successive approximation mode all the DCT coefficients for each image block are sent in each scan. However, only the most significant bits of each coefficient are sent in the first scan, followed by the next most significant bits, until all the bits are sent. The resulting reconstructed images are of reasonably good quality, even for the very early scans, since the high-frequency components of the image


are preserved in all scans. The progressive mode is ideal for transmitting images over bandwidth limited communication channels, since end-users can view a coarse version of the image first and then decide whether a finer version is necessary. Progressive mode is also convenient for browsing applications in electronic commerce or real estate, where a low resolution image is more than adequate if the property is of no interest to the customer.

4. Hierarchical mode. The color image is encoded at multiple resolutions. In the JPEG hierarchical mode a low resolution image is used as the basis for encoding a higher resolution version of the same image, by encoding the difference between the interpolated low resolution version and the higher resolution version. Lower resolution versions can be accessed without first having to reconstruct the full resolution image. The different resolutions are obtained by filtering and down sampling the image, usually in multiples of two in each dimension. The resulting decoded image is up sampled and subtracted from the next level, which is then coded and transmitted as the next layer. The process is repeated until all layers have been coded and transmitted. The hierarchical mode can be used to accommodate equipment with different resolutions and display capabilities.

JPEG utilizes a methodology based on the DCT for compression. It is a symmetrical process with the same complexity for coding and decoding. The baseline JPEG algorithm is composed of three compression steps and three decompression steps. The compression procedure as specified by the JPEG standard is as follows [34]:
- Each color image pixel is transformed to three color values corresponding to a luminance and two chrominance signals, e.g. YCbCr. Each transformed chrominance channel is down sampled by a predetermined factor.
- The transform is performed on a sub-block of each channel image rather than on the entire image. The block size chosen by the JPEG standard is 8×8 pixels, resulting in 64 coefficients after the transform is applied. The blocks are typically input block-by-block from left to right and then block row by block row from top to bottom.
- The resultant 64 coefficients are quantized according to a predefined table. Different quantization tables are used for each color component of an image. Tables 7.4 and 7.5 are typical quantization tables for the luminance and chrominance components used in the JPEG standard. The quantization is an irreversible lossy compression operation in the DCT domain. The extent of this quantization is what determines the eventual compression ratio. This quantization controls the bit accuracy of the respective coefficients and therefore determines the degree of image degradation, both objective and subjective. Because much of the block's energy is contained in the direct current, zero frequency (DC) coefficient, this coefficient receives the highest quantization precision. Other coefficients that hold little of the block's energy can be discarded altogether.


Table 7.4. Quantization table for the luminance component

16  11  10  16  24  40  51  61
12  12  14  19  26  58  60  55
14  13  16  24  40  57  69  56
14  17  22  29  51  87  80  62
18  22  37  56  68 109 103  77
24  35  55  64  81 104 113  92
49  64  78  87 103 121 120 101
72  92  95  98 112 100 103  99

Table 7.5. Quantization table for the chrominance components

17  18  24  47  99  99  99  99
18  21  26  66  99  99  99  99
24  26  56  99  99  99  99  99
47  66  99  99  99  99  99  99
99  99  99  99  99  99  99  99
99  99  99  99  99  99  99  99
99  99  99  99  99  99  99  99
99  99  99  99  99  99  99  99

- After quantization only the low frequency portion of the block contains non-zero coefficients. In order to reduce the number of bits required for storage and communication, as many zeros as possible are placed together so that, rather than dealing with each individual zero, the representation is in terms of the number of zeros. This representation is accomplished through the zig-zag scan shown in Fig. 7.1 (and sketched in code below). The ordering converts the matrix of transform coefficients into a sequence of coefficients along a line of increasing spatial frequency. The scan pertains only to the 63 AC coefficients; in other words, it omits the DC coefficient in the upper left corner of the diagram. The DC coefficient represents the average sample value in the block and is predicted from the previously encoded block to save bits: only the difference from the previous DC coefficient is encoded, a value much smaller than the absolute value of the coefficient. The quantized coefficients are encoded using an entropy coding method, typically Huffman coding, to achieve further compression [34]. JPEG provides the Huffman code tables used with the DC and AC coefficients for both luminance and chrominance. For hierarchical or lossless coding, arithmetic coding tables can be used instead of Huffman coding tables. Once encoded, the coefficients are transmitted to the receiver where they are decoded, and an inverse transformation is performed on them to obtain the reconstructed image.
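A sketch of the quantization and zig-zag steps just described is given below: the round(T/Z) division by the Table 7.4 entries is the lossy step, and the zig-zag ordering lines the coefficients up by increasing spatial frequency so that the trailing zeros can be run-length coded. Function names are illustrative, not part of the standard.

```python
import numpy as np

Q_LUMA = np.array([  # luminance quantization table of Table 7.4
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]])

def quantize(dct_block):
    """Lossy step: divide each coefficient by its table entry and round."""
    return np.round(dct_block / Q_LUMA).astype(np.int32)

def zigzag_indices(n=8):
    """Visit (row, col) pairs along anti-diagonals, alternating direction."""
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diag if s % 2 else diag[::-1])
    return order

def zigzag_scan(block):
    return np.array([block[i, j] for i, j in zigzag_indices(block.shape[0])])

# quantize() followed by zigzag_scan() would be applied to each 8x8 DCT block;
# the leading element is the DC coefficient, coded by DPCM as noted above.
```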


Fig. 7.1. The zig-zag scan

These steps should be repeated until the entire image is in a compressed form. At this point the image can be stored or transmitted as needed. The overall scheme is depicted in Fig. 7.2. The steps in the decompression part of the standard are: (i) decoding the bit stream, (ii) de-quantization, (iii) transforming from the frequency domain back to a spatial image representation, (iv) up-sampling each chrominance channel, and (v) inverse transformation of each color pixel to recover the reconstructed color image. De-quantization is performed by multiplying the coefficients by the respective quantization step. The basic unit is an (8×8) block. The values of the pixels in the individual image blocks are reconstructed via the inverse discrete cosine transform (IDCT) of (7.6). When the last three steps have been repeated for all the data, an image will be reconstructed. To illustrate the preceding discussion, Figs. 7.3-7.8 show the original RGB color image `Peppers' and results coded with JPEG at different quality levels. The distortions introduced by the coder at lower quality levels are obvious.

Recently, a new standard, the so-called JPEG 2000, was introduced as an attempt to focus existing research efforts in the area of still color image compression. The new standard is intended to provide low bit rate operation with subjective image quality superior to existing standards, without sacrificing performance at higher bit rates. The scope of JPEG 2000 includes not only potential new compression algorithms but also flexible compression architectures and formats. Although it will be completed by the year 2000, it will offer state-of-the-art compression for many years beyond. It will also serve image compression needs that are currently not served and it will provide


Fig. 7.2. DCT based coding (source image → forward DCT → quantizer with quantization table → zig-zag scanning → DPCM of the DC coefficients and entropy coding of the AC coefficients with entropy tables → compressed image data)

access to markets that currently do not consider compression as useful for their applications. It is anticipated that the new standard will address open issues such as [4]:
- Variable image formats. The current JPEG standard does not allow large image sizes. However, with the lowering cost of display technologies, visual information will be widely available in the HDTV format and thus the compression standards should support such representations.
- Content-based description. Visual information is difficult to handle both in terms of its size and the scarcity of tools available for navigation and retrieval. A key problem is the effective representation of this data in an environment in which users from different backgrounds can retrieve and handle information without specialized training. A content-based approach based on visual indices, such as color, shape and texture, seems to be a natural choice. Such an approach might be available as part of the evolving JPEG 2000 standard.


Fig. 7.3. Original color image `Peppers'
Figs. 7.4-7.8. The `Peppers' image coded with JPEG at compression ratios of 5:1, 6:1, 6.3:1, 6.35:1 and 6.75:1

- Low bit rate compression. The performance of the current JPEG standard is unacceptable at very low bit rates, mainly due to the distortions introduced by the transformation module. It is anticipated that research will be undertaken in order to guarantee that the new standard will provide excellent compression performance in very low bit rate applications.
- Progressive transmission by pixel accuracy and resolution. Progressive transmission that allows images to be reconstructed with increasing pixel accuracy or spatial resolution as more bits are received is essential in many emerging applications, such as the World Wide Web, image archiving and high resolution color printers. This feature allows the reconstruction of images with different resolutions and pixel accuracies, as needed and desired, for different target devices.
- Open architecture. JPEG 2000 follows an open architecture design in order to optimize the system for different image types and applications. To this end, research is focused on the development of new, highly flexible coding schemes and the development of a structure which should allow the dissemination and integration of those new compression tools. With this capability, the end-user can select tools appropriate to the application and provide for future growth. This feature allows for a decoder that is only required to implement the core tool set plus a parser that understands and executes downloadable software in the bit stream. If needed, unknown tools are requested by the decoder and sent from the source.
- Robustness to bit errors. JPEG 2000 is designed to provide robustness against bit errors. One application where this is important is wireless communication channels. Some portions of the bit stream may be more important than others in determining decoded image quality. Proper design of the bit stream can prevent catastrophic decoding failures. Usage of confinement, or concealment, restart capabilities, or source-channel coding schemes can help minimize the effects of bit errors.
- Protective image security. Protection of the property rights of a digital image is of paramount importance in emerging multimedia applications, such as web-based networks and electronic commerce. The new standard should protect digital images by utilizing one or more of four methods, namely: (i) watermarking, (ii) labeling, (iii) stamping, and (iv) encryption. All of these methods should be applicable to the whole image file or limited to part of it to avoid unauthorized use of the image.
- Backwards compatibility. It is desirable for JPEG 2000 to provide backwards compatibility with the current JPEG standard.
- Interface with MPEG-4. It is anticipated that the JPEG 2000 compression suite will be provided with an appropriate interface allowing the interchange and integration of the still image coding tools into the framework of content-based video standards, such as MPEG-4 and MPEG-7.


In summary, the proposed compression standard for still color images includes many modern features in order to provide low bit rate operation with subjective image quality performance superior to existing standards. By taking advantage of new technologies, the standard is intended to advance standardized image coding systems to serve applications into the next millennium [4].

Subband coding techniques. Subband coding of images has been the subject of intensive research in the last few years [14], [35], [36]. This coding scheme divides the frequency representation of the image into several bands. Selection of the bands is done using a bank of bandpass filters. This scheme is similar to DCT coding in that it divides the image's spectrum into frequency bands and then codes and transmits the bands according to the portion of the image's energy that they contain. However, subband coding is implemented via actual passband filters, while the DCT is computed using a discrete linear transform. This difference in implementation affects the performance of subband coding in terms of complexity, matching to the human perceptual system, and robustness to transmission errors [21].

Fig. 7.9. Subband coding scheme (analysis filters h(n) and g(n) followed by downsampling by 2; upsampling by 2 followed by synthesis filters h'(n) and g'(n), whose outputs are summed to reconstruct the signal)

Specifically, most emphasis in subband coding techniques is given to the wavelet decomposition, a subset of subband decomposition, in which the transformed representation provides a multiresolution data structure [15], [37]. The first step in the wavelet scheme is to perform a wavelet transformation of the image. One of the more practical ways of implementing the wavelet transform is to carry out a multiresolution analysis (MRA) decomposition of the image. MRA performs the decomposition by an iterative process of low-pass and high-pass filtering, followed by sub-sampling of the resultant output signals. This type of iterative process yields a pyramid-like structure of signal components, which includes a single low-resolution component and a series of added detail components, which can be used to perfectly reconstruct the original image.


The scheme is shown in Fig. 7.9. The actual decomposition algorithm is based on the following classical methodology (a one-level sketch in code follows this list):

1. Starting with the actual image, row low-pass filtering is performed, using the low-pass filter g(n), by means of convolution operations.
2. This is followed by column low-pass filtering of the low-passed rows to produce the ll subimage.
3. Column high-pass filtering, using the high-pass filter h(n), is now performed on the low-passed rows to produce the lh subimage.
4. Row high-pass filtering, using h(n), is performed on the input image.
5. Column low-pass filtering is performed on the high-passed rows to produce the hl subimage.
6. Column high-pass filtering is performed on the high-passed rows to produce the hh subimage.
7. The entire procedure is now repeated (l − 1) more times on the resultant ll subimage, where l is the specified number of desired decomposition levels. In other words, the ll subimage now serves as the input image for the next decomposition level.
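The sketch below carries out one level of the decomposition in steps 1-6, with the simple Haar filter pair standing in for the low-pass g(n) and high-pass h(n) filters so that the ll, lh, hl and hh subimages are easy to inspect; longer wavelet filters would be used in practice, and the names are ours.

```python
import numpy as np

def haar_analysis_1d(x):
    """Split a 1-D signal of even length into (low-pass, high-pass) halves."""
    low = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    high = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return low, high

def haar_decompose_2d(image):
    """One MRA level: filter and subsample the rows, then the columns (steps 1-6)."""
    img = image.astype(np.float64)

    def split_rows(block):
        lows, highs = [], []
        for row in block:
            lo, hi = haar_analysis_1d(row)
            lows.append(lo)
            highs.append(hi)
        return np.array(lows), np.array(highs)

    row_low, row_high = split_rows(img)        # steps 1 and 4 (row filtering)
    ll_t, lh_t = split_rows(row_low.T)         # column filtering of the low-passed rows
    hl_t, hh_t = split_rows(row_high.T)        # column filtering of the high-passed rows
    return ll_t.T, lh_t.T, hl_t.T, hh_t.T      # ll, lh, hl, hh subimages

image = np.arange(64, dtype=np.float64).reshape(8, 8)
ll, lh, hl, hh = haar_decompose_2d(image)       # ll feeds the next decomposition level
```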

Fig. 7.10. Relationship between the different scale subspaces V_i, V_{i+1} and W_{i+1}

The MRA decomposition is implemented as a linear convolution procedure using a particular wavelet filter. It is depicted in Fig. 7.11. Since it is not possible to know a priori which filter would provide the best basis in terms of information compactness for the image, the wavelet scheme must try to find the best filter by essentially trying out each of the available filters and selecting the filter that gives the smallest number of non-zero coefficients (by extension, that filter will very often also result in the highest compression ratio). The chosen filter is consequently used in the MRA decomposition procedure. In spite of the simplicity and straightforwardness of the MRA decomposition algorithm, there are two critical choices that have to be made with respect to the algorithm that greatly affect compression performance for a given image. These two choices, or factors, are the choice of wavelet filter and the number of MRA decomposition levels.

Fig. 7.11. Multiresolution analysis decomposition (iterated low-pass g(n) and high-pass h(n) filtering with downsampling by 2)

The most crucial aspect of carrying out the wavelet transform is the choice of wavelet filter. Unlike JPEG and other block transform schemes, where the transformation is performed onto one particular basis (the DCT basis in JPEG), in the wavelet transform there is not a clearly defined basis onto which every image is transformed. Rather, every wavelet filter represents a different basis. A method for calculating the optimal basis, and with it the filter coefficients, is to select the `best' (rather than optimal) wavelet filter from a reservoir of available filters; a selection sketch in code appears at the end of this discussion. Another very crucial consideration in the implementation of the MRA decomposition procedure is the number of decomposition levels that are to be used. In this coding scheme the ll subimage is coded using a lossless scheme that does not compress the subimage to a high degree. That means that a large ll component will adversely affect the achieved compression ratio. On the other hand, a small ll subimage will adversely affect the resultant image quality.

Once the wavelet transform representation is obtained, a quantizer is used to quantize the coefficients. The quantization levels can be fixed or they can be determined adaptively, according to the perceptual importance of the coefficients and according to the complexity of the given image (images with higher complexity normally have to be quantized more coarsely to achieve reasonable bit rates). The use of human visual system properties to quantize the wavelet coefficients enables the scheme to coarsely quantize coefficients which are visually unimportant. In many cases, those visually unimportant coefficients are simply set to zero. By contrast, wavelet coefficients which are deemed to be visually important are quantized more finely. The quantization and the overall reduced number of wavelet coefficients ultimately give better compression ratios, at very high image quality levels. The actual coding stage follows the quantization step. The coding stage consists of differential pulse code modulation (DPCM) to code the ll subimage, and a zero run-length coder to code the added detail wavelet coefficients. DPCM, which is a lossless coding technique, is used in order to preserve the ll subimage perfectly. The run-length scheme, on the other hand, is ideally suited for coding data in which many of the coefficients are zero-valued. The DPCM / zero run-length coding combination achieves bit rates that are slightly better than the bit rates achieved by JPEG.
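The `best filter from a reservoir' idea can be sketched as follows, assuming the PyWavelets package (pywt) is available: each candidate wavelet is tried, and the one producing the fewest coefficients above a small threshold is kept for the MRA decomposition. The candidate list and threshold are arbitrary illustrative choices, not values from the text.

```python
import numpy as np
import pywt

def pick_best_wavelet(image, candidates=("haar", "db2", "db4", "bior2.2"),
                      levels=3, threshold=1.0):
    """Return the candidate wavelet yielding the fewest significant coefficients."""
    best_name, best_count = None, None
    for name in candidates:
        coeffs = pywt.wavedec2(image, name, level=levels)
        arr, _ = pywt.coeffs_to_array(coeffs)
        count = int(np.count_nonzero(np.abs(arr) > threshold))
        if best_count is None or count < best_count:
            best_name, best_count = name, count
    return best_name

image = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.float64)
chosen = pick_best_wavelet(image)   # the filter then used for the MRA decomposition
```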


A completely reverse process takes place at the receiver's end. The coded data stream is decoded, with a DPCM decoder for the ll subimage and a run-length decoder for the detail coefficients. The wavelet representation is thereby recovered, and an inverse wavelet transform is applied to it to obtain the reconstructed image. The overall scheme is depicted in Fig. 7.12.

Fig. 7.12. The wavelet-based scheme (coding module: determine the `best' filter and perform the wavelet transform, determine the quantization step size and quantize the wavelet coefficients, DPCM-code the ll subimage and run-length-code the detail coefficients; decoding module: decode the DPCM and run-length streams and apply the inverse wavelet transform)

7.6 Second Generation Image Compression Techniques

The main characteristic of first generation techniques is that most of the emphasis is placed on deciding how to code the image. In contrast, in second generation or model based techniques the emphasis is placed on deciding what should be coded, with the choice of how to code the information becoming a secondary issue [17]. Hence, the methodology of second generation techniques can be broken down into two parts (as seen in Fig. 7.13), where the first part selects the information from the image (Message Selector module) and the second part codes the selected messages (Message Coder module). It was mentioned before that the human visual system (HVS) perceives visual information in a very selective manner. That is, the HVS picks up specific features from the overall image that it perceives. Therefore, second generation techniques can be very useful for perceptually lossless coding since they can select features that are more relevant to the HVS and then code those features.

Fig. 7.13. Second generation coding schemes (input image → message selector → codeword assignment → coded signal)


In general, second generation techniques pre-process an image in an attempt to extract visual primitives, such as contours and the textural content surrounding the contours. Since contours and textures can be coded very efficiently, compression ratios in excess of 50:1 can be obtained. Below is a short rundown of some common second generation techniques.

1. Pyramidal coding. An image is successively passed through a low-pass filter a number of times. At each iteration, the error between the resultant low-pass image and the input image of that iteration is found. A low-pass filtering operation is then performed on the resultant image from the previous step, and again the output image is used to find the error between the output and input images of that stage. This recursive relationship can be expressed as (a sketch in code follows this list):

e_p(m,n) = x_{p-1}(m,n) - x_p(m,n), for p = 1, 2, ..., P   (7.7)

The error values at each iteration constitute high frequency information. Since the human visual system prefers high frequency information, but at the same time does not have a high contrast sensitivity to it, a small number of bits/pixel is required to code the error information. Also coded is the low frequency information x_P(m,n), which does not require a large number of bits to code. This technique achieves a modest compression ratio of 10:1, but with perceptually lossless image quality.

2. Visual pattern image coding. The VPIC compression technique is similar to VQ in that it attempts to match a block to a pattern from a pre-defined set of patterns and then transmit the index corresponding to that pattern. The main difference between the two is in the principle used to match the pattern. In VQ, an arbitrary block is matched to the pattern closest to it in some error sense, usually MSE. In VPIC, a block is broken down into its low-pass component, which is just the intensity average of the block, and into its spatial variation, or edge information (the high frequency component). Therefore, the mapping criterion adheres to the behavior of the HVS and not to some absolute mathematical error measure. The edge information is then matched to edge patterns from a pre-defined table. The information transmitted is then the average intensity of the block along with an index which corresponds to the pattern selected. This technique is characterized by very low complexity, high compression ratios (11:1-20:1), and excellent image quality [38].

3. Region growing based coding. In this technique the image is first segmented into a collection of closed contours that are perceptible to human observers. Once the image is completely partitioned, the closed contours and the visual attributes, such as the color and texture inside the closed segments, are coded separately and transmitted. Efficient coding of the contours and the visual contents can translate into impressive compression ratios (in excess of 70:1). The image quality then becomes a function of how coarse the segmentation and coding processes


are. The different algorithms and methodologies discussed in Chap. 6 can be used to guide the coding process.

4. Fractal coding. Fractal coding also operates on blocks rather than on the image as a whole. The main idea in fractal coding is to extract the basic geometrical properties of a block. This extraction is done by applying contractive affine transformations to the image blocks. Fractal image compression is based on the observation that all real-world images are rich in affine redundancy. That is, under suitable affine transformations, larger blocks of the image look like smaller blocks in the same image. These affine maps give a compact representation of the original image and are used to regenerate that image, usually with some amount of loss. Therefore, a fractal compressed image is represented in terms of the self-similarity of essential features and not in terms of pixel resolution. This is a unique property of the fractal transform, and therefore an image can be represented at any resolution without encountering the artifacts that are prevalent when using transform based techniques, such as JPEG. Most fractal image coding techniques are based on iterated function systems (IFS) [39]. An IFS is a set of transformations, each of which represents the relationship between a part of the image and the entire image. The objective of the coding scheme is to partition the image into several subimages and find transformations that can map the entire image into these subimages. When these transformations are found, they represent the entire image. In this way, images with global self-similarity can be coded efficiently. However, it is difficult to find such transformations in real life images since natural images are rarely globally self-similar. To this end, a coding scheme based on the so-called partitioned IFS technique was proposed in [40]. In the partitioned IFS approach the objective is to find transformations that map a part of the image into another part of the image. Such transformations can easily be found in natural images. However, the compression ratio of the partitioned IFS is not as high as that of the direct IFS coding scheme. Fractal compression techniques that can be implemented in software are resolution independent and can achieve high compression efficiency [41]. However, unlike the DCT based compression algorithms, which are symmetric, with decompression being the reverse of compression in terms of computational complexity, fractal compression is computationally intensive, while decompression is simple and so fast that it can be performed using software alone. This is because encoding involves many transformations and comparisons to search for a set of fractals, while the decoder simply generates images according to the fractal transformations received. These features make fractal coding well suited to CD-ROM mass storage systems and HDTV broadcasting systems. In summary, the main advantages of fractal based coding schemes are the large compression efficiency


(up to 40:1), usually with relatively good image quality, and resolution independence. The main disadvantage of the scheme is its complexity in terms of the computational effort [20].
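A sketch of the pyramidal recursion of equation (7.7) is given below; a plain 3x3 averaging filter stands in for the low-pass filter, and the reconstruction simply adds the stored error images back, level by level. The filter choice and the names are ours, for illustration only.

```python
import numpy as np

def box_lowpass(img):
    """3x3 mean filter with edge replication."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += padded[1 + di:1 + di + img.shape[0],
                          1 + dj:1 + dj + img.shape[1]]
    return out / 9.0

def build_pyramid(image, P=3):
    """Return ([e_1, ..., e_P], x_P) with e_p = x_{p-1} - x_p as in (7.7)."""
    x = image.astype(np.float64)
    errors = []
    for _ in range(P):
        x_next = box_lowpass(x)
        errors.append(x - x_next)       # high-frequency detail at this level
        x = x_next
    return errors, x                     # x is the coarse x_P(m, n)

def reconstruct(errors, x_P):
    x = x_P
    for e in reversed(errors):
        x = x + e                        # invert the recursion level by level
    return x

img = np.random.default_rng(1).integers(0, 256, (16, 16)).astype(np.float64)
errs, base = build_pyramid(img)
assert np.allclose(reconstruct(errs, base), img)
```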

7.7 Perceptually Motivated Compression Techniques

As was described in the previous sections, efficiency, in terms of the achieved bit rate or compression ratio, and image quality are dependent on each other. Lower bit rates can be achieved at the expense of higher distortion. The main problem in devising efficient image compression techniques is that it is not clear what distortion measures should be used. Traditional objective distortion measures, such as the MSE, do not appear to be very useful in establishing an accurate relationship between efficiency and image quality. This is because objective distortion measures do not correlate well with the distortion perceived by the human visual system (HVS). That is, low MSE distortion might result in degraded images which human observers will not find pleasing, and vice versa. Therefore, in order to improve the performance of image compression techniques it is first necessary to get a better understanding of the human visual system. This section describes some of the important features of the human visual system that have a direct impact on how images are perceived. Once the human visual system is better understood, its features and behavior can be more successfully incorporated into various compression methods.

7.7.1 Modeling the Human Visual System

Perhaps the most difficult part in designing an effective compression method is coming up with a good, robust model for the HVS. The difficulty arises because of the complexity and multi-faceted behavior of the HVS and human perception. Although a model that accounts for all aspects of the HVS is not available, a simplified model that attempts to approximate and explain the behavior of the human visual system exists. The general HVS model is presented in Fig. 7.14.

Fig. 7.14. The human visual system model (low-pass filter → logarithmic point transformation → high-pass filter → detection module)

This simplified model of the HVS consists of four components: a low-pass filter, a logarithmic point transformation to account for some of the nonlinearities of the HVS, a high-pass filter, and a detection module. The low-pass filtering is the first operation that the HVS performs. This operation


corresponds to filtering done by the optical system before the visual information is converted to neural signals [12]. The logarithmic point transformation module models the system's ability to operate over a large intensity range. The high-pass filter block relates to the `lateral inhibition' phenomenon and comes about from the interconnections of the various receptor regions (in lateral inhibition the excitation of a light sensor inhibits the excitation of a neighboring sensor) [12]. These three blocks model elements of the HVS that are more physical in nature. More specifically, both the low-pass and high-pass filtering operations arise because of the actual physiological structure of the eye, while the need to model the logarithmic non-linearity relates to the physiological ability of the eye to adapt to a huge range of light intensities. These operations are relatively straightforward and are therefore easy to represent in this model. The detection module, on the other hand, is considerably harder to model since its functions are more psychophysical in nature. Even though it is extremely hard to accurately and completely model the detection block, an attempt should be made to include as many human perceptual features as possible in such a model. Examples of some of those features are feedback from higher to lower levels in perception, interaction between audio and visual channels, descriptions of non-linear behavior, and peripheral and other high-level effects [42]. At this point it is of course not possible to include all of the above features. However, some human perceptual phenomena about which more is known can be incorporated into the detection model and later be used in image coding. Specifically, there are four dimensions of operation that are relevant to perceptual image coding. These are: (i) intensity, (ii) color, (iii) variation in spatial detail, and (iv) variation in temporal detail. Since the focus of this section is on compression of still images, the first three properties are of more importance. A good starting point for devising a model for the detection block is recognizing that the perceptual process is actually made up of two distinct steps. In the first step, the HVS performs a spatial band-pass filtering operation [42]. This operation does, in fact, accurately model and explain the spatial frequency response curve of the eye. The curve shows that the eye has varying sensitivity to different spatial frequencies, and thus the human visual system itself splits an image into several bands before processing it, rather than processing the image as a whole. The second step is what is referred to as noise-masking or the perceptual distortion threshold. Noise-masking can be defined as the perceptibility of one signal in the presence of another in its time or frequency vicinity [12]. As the name implies, distortion of an image which is below some perceptual threshold cannot be detected by the detection block of the human eye. This perceptual threshold, or more precisely, the point at which a distortion becomes noticeable, is the so-called `just noticeable distortion' (JND). Following the perceptual distortion processing, the image can be encoded in a manner that considers only information that exceeds the JND threshold. This step is referred to as perceptual entropy. Perceptual


entropy coding used alone will produce perceptually lossless image quality. A more general but flexible extension of the JND is the minimally noticeable distortion (MND). Again, as the name suggests, coding an image using an MND threshold will result in a noticeable distortion, but will reduce the bit rate [42]. Next, a few well known perceptual distortion threshold phenomena will be described. These phenomena relate to intensity and variation in spatial detail, which are two of the features that can be incorporated into the image detection and encoding step. Specifically:

1. Intensity. The human eye can only distinguish a small set of intensities out of a range at any given point. Moreover, the ability to detect a particular intensity level depends almost exclusively on the background intensity. Even within that small range the eye cannot detect every possible intensity. In fact, it turns out that a small variation in intensity between the target area and the surrounding area of the image cannot be noticed. In effect, the surrounding area masks small variations in the intensity of the target area. More specifically, if the surrounding area has the same intensity as the background (i.e. L = L_B, where L denotes the intensity of the surrounding area and L_B denotes the background intensity), then the just noticeable distortion in intensity variation, ΔL, is about 2% of the surrounding area intensity [12]. Mathematically, this relation can be expressed as:

\frac{\Delta L}{L} \approx 2\%   (7.8)

The above ratio is known as the `Weber Ratio'. This ratio and the JND contrast threshold increase if L is not equal to L_B or if L is particularly high or low. The implication of this for perceptual image coding is that small variations in the intensity of a target area relative to its neighbors do not have much importance, since the human visual system will not be able to detect these small variations. This property lends itself nicely to reducing the perceptual entropy and the bit rate.

2. Color. The human visual system is less sensitive to chrominance than to luminance. When color images are represented as luminance and chrominance components, for example YCbCr, the chrominance components Cb, Cr can be coded coarsely and fewer bits used. That is to say, the chrominance components can be sub-sampled at a higher ratio and quantized more coarsely. Despite its simplicity the method is quite efficient, and it is widely used as a preprocessing step, prior to applying spatial and temporal compression methods, in coding standards such as JPEG and MPEG.

3. Variation in spatial detail. Two other masking properties that can be useful for perceptual image coding relate to the ability of the eye to detect variation in spatial detail. These two properties are the simultaneous contrast and Mach bands effects, and both occur as a result of the lateral


inhibition phenomenon. In the simultaneous contrast phenomenon, the perceived brightness of a target area changes as the luminance (or intensity) of the surrounding area changes. The target area appears to become darker as the surrounding area becomes brighter and, vice versa, the target area appears to become brighter as the surrounding area becomes darker [26]. Also, if the illumination on both the target and surrounding area is increased, then the target area will appear to have become brighter if the contrast (ΔL) is low, but will appear to have become darker if the contrast is high. The Mach bands effect refers to the eye's tendency to accentuate the actual contrast sharpness at boundaries or edges. That is, regions with a high constant luminance will cause a neighboring region of lower luminance to appear to have even lower luminance. Another example of this effect is that when two regions of high contrast are separated by a transition region in which the luminance changes gradually and smoothly, the transition luminance levels will hardly be noticeable, while the two main regions will still appear to have a high contrast. This effect illustrates the fact that the human eye prefers edge information and that transition regions between regions of high contrast are not detected. In the context of masking, it can be said that luminance values at transition regions are masked by the main boundary regions. Consequently, in lossy compression, edge information should be preserved, since the human visual system is very sensitive to its presence, while transition regions do not have to be faithfully preserved and transmitted.

These well known phenomena can be used to remove visual information that cannot be detected by the human visual system. The following is a summary of more specific ways to use these and other properties to efficiently encode still images, as well as image sequences.

1. Contrast sensitivity. This is one of the most obvious places where considerable bit rate reduction can be obtained. Human observers react more to high frequency information and sharp spatial variation (like edges). However, they cannot detect those spatial variations if the contrast, that is, the change in spatial variation, falls below a certain threshold. Also, it has been shown experimentally that the contrast sensitivity is a function of spatial frequency [12], [9]. Specifically, the highest contrast sensitivity occurs for spatial frequencies at about 5-7 cycles/degree, with the sensitivity dropping off rapidly for higher spatial frequencies [9]. A good way to take advantage of this property is to code the high frequency information coarsely, because of the low sensitivity to its exact value.

2. Dynamic contrast sensitivity. This property is an extension of the contrast sensitivity function to image sequences. Low resolution luminance information has its highest sensitivity at low temporal frequencies, with the sensitivity falling off rapidly at about 20 Hz. This implies that less

This implies that less precision is required for encoding information at high temporal frequencies than at low temporal frequencies [9].
3. Luminance masking. Another place where the bit rate can be reduced is luminance masking. Since the eye cannot detect an intensity change ΔL which is below the Weber ratio, areas of the image that have small intensity variations relative to the surrounding areas do not have to be faithfully or accurately transmitted. This property is useful in coding low frequency information, where only a small number of bits are needed to code the low frequency content of a large image area.
Lastly, to conclude this section, some image compression implementations in which the human visual system is incorporated will be described.
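The chrominance sub-sampling mentioned in property 2 above can be sketched in a few lines. The following is a minimal NumPy illustration, assuming the image has already been converted to separate Y, Cb and Cr planes; the function names, the 2x2 averaging (4:2:0 style sub-sampling) and the particular quantization steps are illustrative choices, not values mandated by any standard.

import numpy as np

def subsample_420(cb, cr):
    """Average each 2x2 block of the chrominance planes (4:2:0-style sub-sampling)."""
    h, w = cb.shape
    h2, w2 = h - h % 2, w - w % 2          # crop to even dimensions
    def pool(c):
        c = c[:h2, :w2]
        return (c[0::2, 0::2] + c[0::2, 1::2] +
                c[1::2, 0::2] + c[1::2, 1::2]) / 4.0
    return pool(cb), pool(cr)

def quantize(plane, step):
    """Uniform quantization; a larger step means coarser coding and fewer bits."""
    return np.round(plane / step) * step

# Chrominance is both sub-sampled and quantized more coarsely than luminance.
y  = np.random.randint(0, 256, (16, 16)).astype(float)
cb = np.random.randint(0, 256, (16, 16)).astype(float)
cr = np.random.randint(0, 256, (16, 16)).astype(float)
cb_s, cr_s = subsample_420(cb, cr)
y_q  = quantize(y, 4)        # finer step for the luminance plane
cb_q = quantize(cb_s, 16)    # coarser step for the chrominance planes (illustrative)
cr_q = quantize(cr_s, 16)

JPEG and MPEG coders typically apply a similar sub-sampling before the transform stage; the coarser chrominance treatment reflects the lower sensitivity of the eye to chrominance detail.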

7.7.2 Perceptually Motivated DCT Image Coding

The approach presented in [43] is essentially based on determining the appropriate quantization values for the quantization matrices so that they match well with the contrast sensitivity function (CSF). Normalizing the DCT coefficients in this way automatically eliminates small contrast variations and yields low bit rates. As described earlier, the overall compression is determined by the extent to which each of the DCT coefficients is quantized, as defined in a quantization table. After transforming an (n x n) block of pixels to its DCT form, the DCT coefficients are normalized using the normalization matrix according to the relation:

\hat{T}(u,v) = \mathrm{round}\left( T(u,v) / Z(u,v) \right)   (7.9)

where T(u,v) is the DCT coefficient, Z(u,v) is the corresponding normalizing value, and \hat{T}(u,v) is the normalized DCT coefficient. Since some DCT coefficients have higher contrast sensitivity than others, greater precision (in the form of more bits per coefficient) is required for them, and their corresponding normalization values are lower than those of the less important coefficients. For example, the suggested JPEG normalization table normalizes the low frequency coefficients, such as the DC value, by relatively small values.
It now seems a straightforward task to recompute the quantization values in accordance with the CSF. According to the rule, low quantization values, that is more precision, should be assigned to those spatial frequencies to which the eye is more sensitive. However, the task is a bit more involved than that. To begin with, the CSF is based on the visibility of a full field sinusoid to the human visual system, and the DCT is not completely physically compatible with the Fourier transform. In other words, in order to use the CSF with the DCT, a correction factor must be applied to the DCT coefficients [43], [44].

Another problem that complicates CSF-based DCT coding is that of sub-threshold summation: there are situations in which individual DCT frequencies are below the contrast threshold prescribed by the CSF, yet their summation is clearly visible. Other factors that have to be taken into account are the reduced visibility of the DCT basis functions due to the oblique effect, the effects of contrast masking and orientation masking, the effect of the mean luminance, and the size of the pixel on the particular monitor being used [43]. By considering several of these effects, quantization tables that are compatible with the human visual system have been derived. Tables 7.6 and 7.7 show the basic normalization table for the luminance component suggested by JPEG next to a normalization table that incorporates the attributes of the human visual system.

Table 7.6. The JPEG suggested quantization table

 16  11  10  16  24  40  51  61
 12  12  14  19  26  58  60  55
 14  13  16  24  40  57  69  56
 14  17  22  29  51  87  80  62
 18  22  37  56  68 109 103  77
 24  35  55  64  81 104 113  92
 49  64  78  87 103 121 120 101
 72  92  95  98 112 100 103  99

Table 7.7. Quantization matrix based on the contrast sensitivity function for 1.0 min/pixel

 10  12  14  19  26  38  57  86
 12  18  21  28  35  41  54  76
 14  21  25  32  44  63  92 136
 19  28  32  41  54  75 107 157
 26  35  44  54  70  95 132 190
 38  41  63  75  95 125 170 239
 57  54  92 107 132 170 227 312
 86  76 136 157 190 239 312 419

With the above quantization table, the bit rate can be reduced from 8 bits/pixel to less than 0.5 bit/pixel while maintaining very high, perceptually lossless, image quality. An important characteristic of the perceptually motivated coder is that all the perceptual overhead is incurred in the encoder only; the decoding performance of the perceptually motivated JPEG is the same as that of a baseline JPEG decoder. Therefore, such an approach is ideal for decoding-heavy applications.
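The normalization step of (7.9) can be sketched as follows. This is a minimal NumPy illustration, assuming 8-bit, level-shifted (8 x 8) luminance blocks and using the JPEG luminance table of Table 7.6 as the matrix Z; a perceptually tuned coder would simply substitute a table such as Table 7.7. The function names are illustrative.

import numpy as np

# Orthonormal 8x8 type-II DCT basis: C @ block @ C.T gives the 2-D DCT of a block.
k = np.arange(8)
C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / 16) * np.sqrt(2.0 / 8)
C[0, :] /= np.sqrt(2)

# JPEG suggested luminance quantization table (Table 7.6).
Z = np.array([[16, 11, 10, 16, 24, 40, 51, 61],
              [12, 12, 14, 19, 26, 58, 60, 55],
              [14, 13, 16, 24, 40, 57, 69, 56],
              [14, 17, 22, 29, 51, 87, 80, 62],
              [18, 22, 37, 56, 68, 109, 103, 77],
              [24, 35, 55, 64, 81, 104, 113, 92],
              [49, 64, 78, 87, 103, 121, 120, 101],
              [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

def quantize_block(block):
    """Eq. (7.9): T_hat(u,v) = round(T(u,v) / Z(u,v)) on a level-shifted 8x8 block."""
    T = C @ (block - 128.0) @ C.T          # forward 2-D DCT
    return np.round(T / Z)

def dequantize_block(T_hat):
    """Decoder side: multiply back by Z and invert the transform."""
    return C.T @ (T_hat * Z) @ C + 128.0

Note that the decoder only multiplies by Z and inverts the transform, which is why a perceptual tuning of Z adds no cost on the decoding side.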

A specific example is multimedia communications. With the continuous advancement of computer processing power and display technology, and the rapidly increasing popularity of the Internet, visual information is now very much within reach of end-users. One characteristic of visual information on the Internet is that it is, in most cases, accessed by decoding-heavy applications or systems. For instance, the front pages of information providers receive millions of hits every day, but their images and video streams are created only once. The same is true for the thousands of JPEG files and MPEG video clips on the Internet. A perceptually motivated scheme designed to reduce storage costs for image and video information encoded using transform-based techniques, such as JPEG or MPEG, therefore offers an attractive option.

7.7.3 Perceptually Motivated Wavelet-based Coding

Applying the MRA decomposition stage to the image produces several wavelet coefficient subimages and a single LL subimage, which is a scaled-down version of the original image. Although many of the wavelet coefficients are close to zero, the vast majority of them are not exactly zero-valued. Hence, it is very inefficient to compress the wavelet coefficients with the Zero Run-Length coding technique, which is based on the premise that most of the coefficients are zero-valued. Fortunately, many of the non-zero coefficients do not contribute much to the overall perceptual quality of the image and can therefore be coarsely quantized or discarded altogether. To achieve this, the wavelet coefficient subimages are processed with a processing module that uses properties of the human visual system (HVS) to determine the extent of the quantization to be applied to a given wavelet coefficient. Coefficients which are visually insignificant are quantized more coarsely (possibly being set to 0), while the visually significant coefficients are quantized more finely. As explained before, several common HVS properties can be incorporated into the processing module:
1. The HVS exhibits relatively low sensitivity to high resolution bands and heightened sensitivity to lower resolution bands.
2. Certain spatial features in an image are more important to the HVS than others. More specifically, features such as edges and texture are visually more significant than background features that have a near constant value.
3. There are several masking properties that mask small perturbations in the image.
A number of HVS-based schemes for processing wavelet coefficients have been developed over the past few years. Most notably, an elegant method that combines the band sensitivity, luminance masking, texture masking, and edge height properties into a single formula yielding the quantization step size for a particular wavelet coefficient was developed in [37].

The formula is given as:

q_{step}(r,s,x,y) = q_0 \cdot \mathrm{frequency}(r,s) \cdot \mathrm{luminance}(r,x,y) \cdot \mathrm{texture}(r,x,y)^{0.034}   (7.10)

In the above equation, q_0 is a normalization constant, r represents the decomposition level, s represents the particular subimage within a decomposition level (for example hl, lh, hh), and x and y are the spatial coordinates within each subimage. The frequency, luminance and texture components are calculated as follows:

\mathrm{frequency}(r,s) = \left\{ \sqrt{2} \text{ if } s = HH;\; 1 \text{ otherwise} \right\} \times \left\{ 1.00 \text{ if } r = 0;\; 0.32 \text{ if } r = 1;\; 0.16 \text{ if } r = 2 \right\}   (7.11)

\mathrm{luminance}(r,x,y) = 3 + \frac{1}{256} \sum_{i=0}^{1} \sum_{j=0}^{1} I^{2,ll}\!\left( i + 1 + \frac{x}{2^{2-r}},\; j + 1 + \frac{y}{2^{2-r}} \right)   (7.12)

\mathrm{texture}(r,x,y) = \sum_{k=1}^{2-r} 16^{-k} \sum_{s \in \{hh,lh,hl\}} \sum_{i=0}^{1} \sum_{j=0}^{1} \left( I^{k+r,s}\!\left( i + \frac{x}{2^{k}},\; j + \frac{y}{2^{k}} \right) \right)^{2} + 16^{-(2-r)} \,\mathrm{var}\!\left( I^{2,ll}\!\left( \{1,2\} + \frac{x}{2^{2-r}},\; \{1,2\} + \frac{y}{2^{2-r}} \right) \right)   (7.13)

In (7.12) the notation I^{2,ll}(x,y) denotes the coefficient values of the LL subimage at the third MRA decomposition level. These equations, (7.11)-(7.13), are essentially heuristic formulas, in the sense that they do not necessarily give optimal results but rather give good results for most images (with respect to image quality and bit rate). Better image quality, at the expense of the bit rate, can always be obtained by altering the parameter values in the above equations.
As pointed out in [46], the main problem with the method of [37] is the relatively high computational effort involved in computing the quantization step size values. That method requires computation of the texture component, which, as (7.13) shows, is based on all spatially related coefficient values in the lower level subimages. An alternative way of computing the quantization step size, in which the texture component is not used, was proposed in [46]. Rather, the quantization step size is computed only from the luminance level and edge height associated with a particular wavelet coefficient. Both the luminance level and the edge height values are computed from the LL subimage, which greatly reduces the computational effort [46]. The quantization step size is then calculated as:

q_{step}(s,r,x,y) = q_0 \cdot \mathrm{frequency}(s,r) \cdot \min\{ BS(s,x,y),\; ES(s,x,y) \}   (7.14)

In the above equation, BS(s,x,y) is the background sensitivity function, which is based solely on the luminance values derived from the LL subimage. Similarly, ES(s,x,y) is the edge sensitivity function, which is based solely on the edge height values that are also derived from the LL subimage. Since computational efficiency is a paramount consideration in [46], the quantization procedure used was similar to the methodology proposed there. Several modifications that enable the overall performance of the wavelet scheme to exceed that of JPEG are also suggested in [46].
Like these methods, the implemented processing module also computes a quantization level for each wavelet coefficient based on the local luminance level and the edge height in the vicinity of the particular wavelet coefficient. There are, however, two defining differences between the implemented method and the method introduced in [46]. The first difference is that the implemented scheme takes into account the fact that sharp edges are visually more significant than other spatial features. This HVS property is incorporated into the scheme by quantizing visually insignificant features more coarsely than visually significant features. The coarseness of the quantization is controlled through the normalization factor q_0; in other words, the scheme uses two normalization factors, one for edges and one for non-edge features (normally referred to as background information). The second difference is that the two normalization factors, q_0^{edge} and q_0^{back}, are made adaptive parameters that depend on the complexity of the particular image. More specifically, it is found that for high complexity images, that is images with a lot of edge information, the normalization factors have to be increased in order to achieve compression ratios comparable to those of JPEG. The implemented processing module works as follows:
1. All the background luminance values are computed using the LL subimage pixel values. The luminance values are computed according to [37]:

\mathrm{luminance}(x_{ll}, y_{ll}) = 3 + \frac{1}{256} \sum_{i=0}^{1} \sum_{j=0}^{1} I^{ll}(x_{ll} + i,\; y_{ll} + j)   (7.15)

In the implemented scheme, the processing module stores the luminance values in memory and then retrieves them as they are needed.
2. All the edge height values are computed using the LL subimage pixel values. The edge height values are computed according to [46]:

EH(x_{ll}, y_{ll}) = 0.37\,|D_{vert}| + 0.37\,|D_{hori}| + 0.26\,|D_{diag}|   (7.16)

where

D_{vert} = I^{ll}(x_{ll}, y_{ll}) - I^{ll}(x_{ll}, y_{ll}+1)
D_{hori} = I^{ll}(x_{ll}, y_{ll}) - I^{ll}(x_{ll}+1, y_{ll})
D_{diag} = I^{ll}(x_{ll}, y_{ll}) - I^{ll}(x_{ll}+1, y_{ll}+1)

In the suggested scheme, the processing module stores the edge height values in memory and then retrieves them as they are needed.
3. The next step is to determine the quantization parameter values that correspond to the particular image being compressed. Besides the quantization parameters q_0^{edge} and q_0^{back}, which control the quantization values for the edge and background features respectively, an additional parameter q_{thresh} is needed: features with edge height values above this threshold are considered edges. As mentioned above, the quantization parameter values are adjusted to reflect the complexity of the particular image. Images with high complexity require parameters with large values in order to be compressed efficiently. A good measure of image complexity is provided by the number of wavelet coefficients retained during the filter selection stage, since complex images invariably produce more retained coefficients than simpler images. In determining what quantization parameter values to use for each image, the only guiding criterion is to find the parameters which give results that are better than those achieved with JPEG. Hence, by a process of trial and error, the parameter values are continuously adjusted until the best results (PSNR and the corresponding compression ratio) for a particular image are obtained. A result is considered good if both the PSNR and the compression ratio exceed the JPEG values; for images where it is not possible to exceed the performance of JPEG, the best compromise of PSNR and compression ratio is used. Following this trial-and-error procedure, the quantization parameter values for several trial images were determined. Using these manually determined parameter values, a linear function was derived for each parameter using a simple linear regression procedure, expressing each quantization parameter as a function of the number of retained coefficients. The three derived linear functions are given as:

q_{thresh} = -1.1308 + 0.0013 \cdot (\text{number of retained coefficients})   (7.17)
q_0^{edge} = 0.8170 + 0.00001079 \cdot (\text{number of retained coefficients})   (7.18)
q_0^{back} = 1.0223 + 0.00002263 \cdot (\text{number of retained coefficients})   (7.19)

4. The last part of the processing stage is to process each of the wavelet coefficients in the various detail subimages. The processing procedure is simple and takes place as follows:
a) For a particular wavelet coefficient, use the spatial coordinates of that coefficient to find the corresponding LL subimage coordinates. Use these LL coordinates (i.e., x_{ll} and y_{ll}) to fetch the corresponding edge height value stored in memory.
b) If the edge height value exceeds the q_{thresh} parameter value, the coefficient is an edge coefficient. In that case, use the q_0^{edge} quantization parameter to calculate the quantization step size for the current wavelet coefficient using the formula:

q_{step} = \mathrm{floor}\!\left( q_0^{edge} \cdot \mathrm{frequency}(r,s) \cdot \mathrm{luminance}(x_{ll}, y_{ll}) + 0.5 \right)   (7.20)

where luminance(x_{ll}, y_{ll}) is the luminance value calculated in the first step.
c) If the edge height value is lower than the q_{thresh} parameter value, the coefficient is a background coefficient. In that case, use the q_0^{back} quantization parameter to calculate the quantization step size for the current wavelet coefficient using the formula:

q_{step} = \mathrm{floor}\!\left( q_0^{back} \cdot \mathrm{frequency}(r,s) \cdot \mathrm{luminance}(x_{ll}, y_{ll}) + 0.5 \right)   (7.21)

d) Quantize the wavelet coefficient using q_{step} (a code sketch of steps 1, 2 and 4 is given after Fig. 7.15).
The operation of the perceptual processing module is depicted in Fig. 7.15.

Fig. 7.15. Overall operation of the processing module
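A minimal sketch of steps 1, 2 and 4 of this procedure is given below, assuming the LL subimage is available as a NumPy array and that a coefficient's LL coordinates (x_ll, y_ll) and its level/subimage indices (r, s) have already been determined. The band weights reused from (7.11), the final uniform quantizer and the function names are illustrative choices, not part of the published scheme; boundary handling at the subimage edges is omitted.

import numpy as np

def luminance(LL, x, y):
    """Eq. (7.15): local background luminance from a 2x2 neighbourhood of the LL subimage."""
    return 3.0 + LL[x:x + 2, y:y + 2].sum() / 256.0

def edge_height(LL, x, y):
    """Eq. (7.16): weighted vertical, horizontal and diagonal differences in the LL subimage."""
    d_vert = LL[x, y] - LL[x, y + 1]
    d_hori = LL[x, y] - LL[x + 1, y]
    d_diag = LL[x, y] - LL[x + 1, y + 1]
    return 0.37 * abs(d_vert) + 0.37 * abs(d_hori) + 0.26 * abs(d_diag)

def frequency(r, s):
    """Band sensitivity weighting in the spirit of Eq. (7.11), for levels r = 0, 1, 2."""
    w = np.sqrt(2.0) if s == 'hh' else 1.0
    return w * {0: 1.00, 1: 0.32, 2: 0.16}[r]

def quantize_coefficient(c, r, s, LL, x_ll, y_ll, q0_edge, q0_back, q_thresh):
    """Steps (b)-(d): pick the edge or background normalization factor and apply (7.20)/(7.21)."""
    q0 = q0_edge if edge_height(LL, x_ll, y_ll) > q_thresh else q0_back
    q_step = np.floor(q0 * frequency(r, s) * luminance(LL, x_ll, y_ll) + 0.5)
    return np.round(c / q_step) * q_step   # assumed uniform quantizer with step q_step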

7.7.4 Perceptually Motivated Region-based Coding

Region growing based coding is a second generation image compression technique that operates in the spatial domain [17], [18]. As such, the emphasis of this method is placed on the initial selection of image information; the information selection step is followed by an efficient coding procedure. The technique is based on segmenting an image into contour regions in which the contrast variation is small, and coding the texture in each segment in accordance with some error criterion. This segmentation is consistent with the behavior of the human visual system, which prefers edge (or contour) information and cannot distinguish well between small contrast variations.
The segmentation procedure is carried out by first selecting a segmentation parameter, which could be a specific color, a color variation, or any other appropriate measure of discrimination, such as texture. Because noise can adversely affect a segmentation process based on color, it is desirable that the noise first be removed by means of vector filtering. In this particular case the segmentation parameter is chosen to be color variation. Hence, the rule for segmenting the image is that neighboring pixels that are within a certain color range are grouped into the same segment. Depending on the compression ratio desired, it may become necessary to reduce the number of contour segments obtained at the end of this step. This can be achieved by joining neighboring segments to one another, or by using a higher threshold value for the segmentation parameter. The different procedures discussed in Chap. 6 can be utilized for this task.
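A minimal sketch of the segmentation rule just described is given below, assuming an RGB (or other vector-valued) image stored as a NumPy array. Growing a region from a single seed, comparing candidate pixels to the seed color with a Euclidean distance threshold, and 4-connectivity are all illustrative simplifications of the procedures discussed in Chap. 6.

import numpy as np
from collections import deque

def grow_region(image, seed, threshold):
    """Group neighbouring pixels whose color stays within `threshold` of the
    seed color (a simple region-growing rule; 4-connectivity assumed)."""
    h, w, _ = image.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_color = image[seed].astype(float)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if np.linalg.norm(image[ny, nx].astype(float) - seed_color) <= threshold:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask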

Once the color image is partitioned, the contours and texture of each segment have to be coded and transmitted. The contours themselves have to be carefully and accurately coded, since the human visual system is particularly sensitive to edge information. In such an approach, contours are coded using line and circle segments wherever possible. It should also be noted that adjacent segments share contours, so a further coding reduction can be realized by coding each shared contour only once. Although the human visual system is less sensitive to textural variations than it is to the existence of contours, care should be taken that the textural contents of each segment are not overly distorted. Since the contrast variation within every segment is kept below the segmentation parameter, it is usually sufficient to approximate the texture with a 2-D polynomial; only the polynomial's coefficients then need to be transmitted in order to reconstruct the texture inside every contour segment.
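The texture approximation step can be sketched as follows. This is a minimal NumPy illustration, assuming each segment is available as a boolean mask over the image; the quadratic (second-order) polynomial and the plain least-squares fit are illustrative choices, since the text only specifies that a low-order 2-D polynomial is fitted and its coefficients transmitted.

import numpy as np

def fit_texture(image, mask, order=2):
    """Least-squares fit of a 2-D polynomial to the pixels of one segment (mask == True)."""
    ys, xs = np.nonzero(mask)
    terms = [xs**i * ys**j for i in range(order + 1)
                           for j in range(order + 1 - i)]    # 1, x, y, x^2, xy, y^2, ...
    A = np.stack(terms, axis=1).astype(float)
    coeffs, *_ = np.linalg.lstsq(A, image[ys, xs].astype(float), rcond=None)
    return coeffs                       # only these values need to be transmitted

def reconstruct_texture(shape, mask, coeffs, order=2):
    """Rebuild the texture inside the segment from the transmitted coefficients."""
    out = np.zeros(shape)
    ys, xs = np.nonzero(mask)
    terms = [xs**i * ys**j for i in range(order + 1)
                           for j in range(order + 1 - i)]
    out[ys, xs] = np.stack(terms, axis=1) @ coeffs
    return out

At the decoder, reconstruct_texture regenerates a segment's interior from a handful of transmitted coefficients, which is where much of the compression gain of the region-based approach comes from.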

The technique gives varying compression ratios and image quality. Good image quality can be obtained at the expense of a larger bit rate simply by allowing for more closed contour segments and higher order polynomials to approximate the textural contents within each segment. As an example, compression ratios of the order of 50:1 with relatively good image quality have been obtained using the proposed methodology.

7.8 Color Video Compression

Compressing video signals means that the algorithm should be able to exploit temporal masking as well as spectral masking. In video coders, such as the industry standard MPEG, the components of digital color video signals are compressed separately with shared control and motion estimation mechanisms. Existing compression techniques for still images can serve as the basis for the development of color video coding techniques. However, digital video signals have an associated frame rate, from 15 to 60 frames per second, which provides the illusion of motion in the displayed signal. A moving object in a video sequence tends to mask the background that emerges when the object moves, making it easier to compress the uncovered part of the image. In addition, since most video objects move in predictable patterns, the motion trajectory can be predicted and used to enhance the compression gain.
Motion estimation is computationally expensive, and only luminance pixels are considered in the calculations. For a block of (16 x 16) luminance pixels from the current frame, the most similar block in the previous frame is searched for; the differences in the coordinates of these blocks define the elements of the so-called motion vector. The current frame is predicted from the previous frame with its blocks of data in all the color components shifted according to the motion vectors, which have to be transmitted to the decoder as side information.
Although still color images are sized primarily to fit workstations equipped with (640 x 480) VGA or (1024 x 768) XVGA color monitors, video signals come in many sizes. For example, the input video data for very low bit rate applications is composed of small color images in the quarter common intermediate format (QCIF), with (144 x 176) pixels in luminance and a quarter of this resolution in the chrominance components; the frame rate for such applications is approximately 5 to 15 frames per second. Medium bit rate video applications deal with images of average size, approximately (288 x 352) pixels in luminance and a quarter of this resolution in the chrominances, at a frame rate of 25 or 30 frames per second. Alternatively, the ITU-R 601 format with interlaced (576 x 720) pixels in luminance and half-horizontal resolution in the chrominances is also used.
A number of standards are available today. Depending on the intended application they can be defined as:

1. Standards for video conferencing applications. This family includes the ITU standard H.261 for ISDN video conferencing, the H.263 standard for POTS video conferencing, and the H.262 standard for ATM-based, broadband video conferencing. H.261 is a video codec capable of operating at affordable telecom bit rates. It is a motion-compensated, transform-based coding scheme that utilizes (16 x 16) macroblock motion compensation, an (8 x 8) block DCT, scalar quantization, and two-dimensional run-level, variable length, entropy coding. H.263 is designed to handle very low bit rate video, with a target bit rate range of 10-30 Kbits per second. The key technical features of H.263 are variable block size motion compensation, overlapped block motion compensation, picture-extrapolating motion vectors, median-based motion vector prediction, and more efficient header information signaling.
2. Standards for multimedia applications. This family includes the ISO MPEG-1 standard, intended for storing movies on CD read-only memory, with 1.2 Mb/s allocated to video coding and 256 Kb/s allocated to audio coding. The MPEG-2 standard was developed for storing broadcast video on DVD, with 2 to 15 Mb/s allocated to video and audio coding. In the most recent member of the family, the emphasis has shifted from pixel coding to object-based coding at rates of 8 Kb/s or lower and 1 Mb/s or higher. The MPEG-4 visual standard will include most technical features of the prior video and still image coding schemes, and will also include a number of new features, such as wavelet-based coding of still images, segmented shape coding of objects, and hybrids of synthetic and natural video coding.
Most standards use versions of a motion-compensated, DCT-based block hybrid coder. The main idea is to combine transform coding, primarily in the form of the DCT on (8 x 8) pixel blocks, with predictive coding in the form of DPCM, in order to reduce the storage and computation required for the compressed image. Since motion compensation is difficult to perform in the transform domain, the first step in the video coder is to create a motion-compensated prediction error using macroblocks of (16 x 16) pixels. The resulting error signal is transformed using a DCT, quantized by an adaptive quantizer, entropy encoded using a variable length coder, and buffered for transmission over a fixed rate channel.
The MPEG family of standards is based on the above principle. The MPEG-1 system performs spatial coding using a DCT on (8 x 8) pixel blocks, quantizes the DCT coefficients using fixed or perceptually motivated tables, orders the DCT coefficients using the zig-zag scan, and processes the coefficients using variable run-length coding. Temporal coding is achieved by using uni- and bi-directional motion compensated prediction with three types of frames, namely:
1. Intraframe (I). The I-frames of a video sequence are compressed independently of all previous or future frames, using a procedure similar to JPEG.

The resulting coefficients are also passed through the inverse DCT in order to generate the reference frame, which is then stored in memory. This I-frame is used in motion estimation for generating the P- and B-frames.
2. Predictive (P). The P-frames are coded based on the previous I-frame or P-frame. The motion-compensated, forward predicted P-frame is generated using the motion vectors and the reference frame. The DCT coefficients of the difference between the input P-frame and the predicted frame are quantized and coded using variable length and Huffman coding. The reconstructed P-frame is generated by performing the inverse quantization, taking the inverse DCT of the difference between the predicted frame and the input frame, and finally adding this difference to the forward predicted frame.
3. Bi-directional frames (B). The B-frames are coded based on the next and/or the previous frames. The motion estimation module is used to bi-directionally estimate the motion vectors based on the nearest reference I- and P-frames. The motion-compensated frame is generated using the pair of nearest reference frames and the bi-directionally estimated motion vectors.
The video coder generates a bit stream with a variable bit rate. In order to match this bit rate to the channel capacity, the coder parameters are controlled according to the output buffer occupancy. Bit rate control is performed by adjusting parameters such as the quantization step used in the DCT component and the distance between intra and predictive frames. The compression procedure as specified by the MPEG standard is as follows:
1. Preprocessing of the input frames, namely color space conversion and spatial resolution adjustment. A frame type is decided for each input frame, and if bi-directional frames are used in the video sequence, the frames are reordered.
2. Each frame is divided into macroblocks of (16 x 16) pixels. Macroblocks in I-frames are intra coded. Macroblocks in P-frames are either intra coded or forward predictive coded based on previous I-frames or P-frames, depending on coding efficiency. Macroblocks in B-frames are intra coded, forward predictive coded, backward predictive coded, or bi-directionally predictive coded. For predictive coded macroblocks, motion vectors are found and prediction errors are calculated (a block-matching sketch is given after this procedure).
3. The intra coded macroblocks and the prediction errors of the predictive coded macroblocks are divided into six blocks (4 luminance and 2 chrominance) of (8 x 8) pixels each. A two-dimensional DCT is applied to each block to obtain transform coefficients, which are quantized and zig-zag scanned.

4. The quantized transform coefficients and overhead information, such as the frame type, macroblock address and motion vectors, are variable length coded using predefined tables.
The operation of the coding module is depicted in Fig. 7.16; the decoder is depicted in Fig. 7.17.
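The macroblock motion estimation used in step 2, a search for the (16 x 16) luminance block of the previous frame that best matches the current macroblock, can be sketched as follows. This minimal NumPy example uses an exhaustive (full) search with a sum-of-absolute-differences criterion; the +/-8 pixel search range and the SAD measure are illustrative choices, since practical coders use faster search strategies.

import numpy as np

def motion_vector(prev, curr, by, bx, block=16, search=8):
    """Full-search block matching: best (dy, dx) offset of the macroblock at
    (by, bx) in the current frame, relative to the previous frame, by minimum
    sum of absolute differences."""
    target = curr[by:by + block, bx:bx + block].astype(float)
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > prev.shape[0] or x + block > prev.shape[1]:
                continue                          # candidate block falls outside the frame
            sad = np.abs(prev[y:y + block, x:x + block].astype(float) - target).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad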

Fig. 7.16. MPEG-1: Coding module (DCT, quantization, run-length and entropy coding of the source image sequence, with inverse quantization, IDCT, motion-compensated prediction and motion vector coding in the feedback loop, followed by the output buffer)

Fig. 7.17. MPEG-1: Decoding module (entropy and run-length decoding, inverse quantization and IDCT, with motion-compensated prediction driven by the decoded motion vectors to produce the output image sequence)

The promising results obtained with object-based coding techniques for still images motivated their extension to video sequences. Objects in a video stream can be defined as regions specified by color, shape, textural content, and motion.

The methods used for motion estimation and texture coding are extensions of those used in the block-based methodologies. However, since actual objects and not flat rigid blocks are tracked, the motion-compensated prediction is more exact, thereby reducing the amount of information needed to encode the residual prediction error signal.
MPEG-4 is a new multimedia standard which specifies the coding of audio and video objects, both natural and synthetic, a multiplexed representation of many such simultaneous objects, as well as the description and dynamics of the scene containing the objects. The video portion of the MPEG-4 standard, the so-called MPEG-4 visual part, deals with the coding of natural and synthetic visual data, such as facial animation and mesh-based coding. Central to the MPEG-4 visual part is the concept of the video object and its temporal instances, the so-called video object planes (VOPs). A VOP can be fully described by its shape and/or the variations in its luminance and chrominance values. In natural images, VOPs are obtained by interactive or automatic segmentation, and the resulting shape information can be represented as a binary shape mask. A segmented sequence contains a number of well defined VOPs; each VOP is coded separately, and the results are multiplexed to form a bitstream that users can access and manipulate. Together with the video objects, the encoder sends information about the scene composition to indicate where and when the VOPs of the video objects are to be displayed. MPEG-4 extends the concept of the I-frames, P-frames and B-frames of MPEG-1 and MPEG-2 to VOPs; the standard therefore defines the I-VOP, as well as the P-VOP and B-VOP based on forward and backward prediction.
The encoder used to code the video objects of the scene has three main components: (i) a motion coder which uses macroblock and block motion estimation and compensation similar to that of MPEG-1, but modified to work with arbitrary shapes, (ii) a texture coder that uses block DCT coding adapted to work with arbitrary shapes, and (iii) a shape coder that deals with the shape itself. A rectangular bounding box enclosing the shape to be coded is formed such that its horizontal and vertical dimensions are multiples of 16 pixels. The pixels on the boundary or inside the object are assigned a value of 255 and are considered opaque, while the pixels outside the object but inside the bounding box are considered transparent and are assigned a value of 0 (a sketch of this mask construction is given at the end of this section). Coding of each (16 x 16) block representing the shape can be performed either lossily or losslessly. The degree of lossiness of the shape coding is controlled by a threshold that can take the values 0, 16, 32, ..., 256; the higher the value of the threshold, the more lossy the shape representation. In addition, each shape block can be coded in intra-mode or in inter-mode. In intra-mode, no explicit prediction is performed. In inter-mode, the shape information is differenced with respect to the prediction obtained using a motion vector, and the resulting error may be coded. Decoding is the inverse sequence of operations, with the exception of encoder-specific functions.
The object-based description of MPEG-4 allows increased interactivity and scalability in both the temporal and the spatial domain.

Scalable coding offers a means of scaling the decoder when processing resources are limited or vary with time, and it also allows graceful degradation of quality when bandwidth resources are limited or vary with time. Spatially scalable encoding means that the decoder can either offer the base layer output or display an enhancement layer output, depending on problem constraints and user defined specifications. Temporally scalable coding, on the other hand, refers to a decoder that can increase the temporal resolution of the decoded video using enhancement VOPs in conjunction with the decoded base layer VOPs. Therefore, the new standard is better suited to address variable Quality-of-Service requests and can accommodate high levels of user interaction. It is anticipated that, when fully developed, MPEG-4 will offer increased flexibility in coding quality control, channel bandwidth adaptation, and adaptation to decoder processing resource variations.
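The binary shape representation described earlier in this section, a bounding box whose dimensions are padded to multiples of 16 pixels and an alpha mask in which opaque object pixels take the value 255 and transparent pixels the value 0, can be sketched as follows. This is a minimal NumPy illustration, assuming the object is given as a boolean segmentation mask; it covers only the mask construction, not the lossy or lossless coding of the (16 x 16) shape blocks.

import numpy as np

def shape_mask_and_bbox(object_mask):
    """Bounding box padded to multiples of 16 and a 0/255 binary alpha mask for a VOP."""
    ys, xs = np.nonzero(object_mask)
    top, left = ys.min(), xs.min()
    h = ys.max() - top + 1
    w = xs.max() - left + 1
    h = ((h + 15) // 16) * 16              # pad each dimension up to a multiple of 16
    w = ((w + 15) // 16) * 16
    alpha = np.zeros((h, w), dtype=np.uint8)
    inside = object_mask[top:top + h, left:left + w]
    alpha[:inside.shape[0], :inside.shape[1]] = np.where(inside, 255, 0)
    return alpha, (top, left, h, w)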

7.9 Conclusion

In this chapter many coding schemes were reviewed. To achieve a high compression ratio at a given image quality, practical systems use a combination of these techniques, and the choice of the appropriate method depends heavily on the application at hand. With the maturing of the area, international standards have become available. These include the JPEG standard, a generic scheme for compressing still color images, the MPEG suite of standards for video coding applications, and the H.261/H.263 standards for video conferencing and mobile communications. It is anticipated that these standards will be widely used in the next few years and will facilitate the development of emerging applications.
The tremendous advances in both software and hardware have brought about the integration of multiple media types within a unified framework. This has allowed the merging of video, audio, text, and graphics, with enormous possibilities for new applications, and this integration is at the forefront of the convergence of the computer, telecommunications and broadcast industries. The realization of these new technologies and applications, however, demands new methods of processing visual information. Interest has shifted from pixel-based models, such as pulse code modulation, to statistically dependent pixel models, such as transform coding, and on to object-based approaches. Therefore, in view of the requirements of future applications, the future direction of image coding techniques is to further develop model-based schemes as well as perceptually motivated techniques.
Visual information is an integral part of many newly emerging multimedia applications. Recent advances in the area of mobile communications and the tremendous growth of the Internet have placed even greater demands on the need for more effective video coding schemes. However, future coding techniques must focus on providing better ways to represent, integrate and exchange visual information in addition to efficient compression methods. These efforts aim to provide the user with greater flexibility for content-based access and manipulation of multimedia data.

Numerous video applications, such as portable video phones, video conferencing, multimedia databases, and video-on-demand, can greatly benefit from better compression schemes and this added content-based functionality. International video coding standards, such as H.261 and, more recently, H.263, are widely used for very low bit rate applications such as those described above. These existing standards, including MPEG-1 and MPEG-2, are all based on the same framework; that is, they employ a block-based motion compensation scheme and the discrete cosine transform for intra-frame encoding. However, this block-based approach introduces blocking and motion artifacts in the reconstructed sequences. Furthermore, the existing standards deal with video exclusively at the frame level, thereby preventing the manipulation of individual objects within the bit stream. Second generation coding algorithms have focused on representing a scene in terms of objects rather than square blocks. This approach not only improves the coding efficiency and alleviates the blocking artifacts, but can also support the content-based functionalities mentioned previously by allowing interactivity and manipulation of specific objects within the video stream. These are some of the objectives and issues addressed within the framework of the MPEG-4 and future MPEG-7 standards.
High compression ratios and very good image quality, in fact perceptually lossless image quality, can be achieved by incorporating the characteristics of the human visual system into traditional image compression schemes, or by using second generation techniques which are specifically designed to account for the HVS characteristics. While these techniques are successful in addressing the current need for both efficiency and image quality, the ongoing development and evolution of video applications might render the current state of these techniques unsatisfactory in a few years. It has become evident that, in order to keep up with the growing sophistication of multimedia applications, the focus of still image compression research should not only be on finding new techniques or improving existing ones, primarily second generation techniques, but also on improving our understanding of the human visual system and refining the existing models. Indeed, existing models account for only a few of the many behavioral attributes of the HVS, and a perceptually motivated scheme is only as good as the perceptual model it uses. With a more general and complete perceptual model, image compression techniques will be able to further eliminate visual information that is of no importance to the human visual system, thus achieving better performance.

References

1. Raghavan, S. V., Tripathi, S. K. (1998): Networked Multimedia Systems: Concepts, Architecture and Design. Prentice Hall, Upper Saddle River, New Jersey.

2. Netravali, A. N., Haskell, B. G. (1995): Digital Pictures: Representation, Compression and Standards. 2nd edition, Plenum Press, New York, N.Y.
3. Joint Photographic Experts Group (1998): JPEG Home Page. www.disc.org.uk/public/jpeghomepage.htm
4. ISO/IEC, JTC1/SC29/WG1 N505 (ITU-T SG8) (1997): Coding of still images. Electronic preprint.
5. Pennebaker, W. B., Mitchell, J. L. (1993): JPEG Still Image Data Compression Standard. Van Nostrand Reinhold, New York, N.Y.
6. Chiariglione, L. (1997): MPEG and multimedia communications. IEEE Transactions on Circuits and Systems for Video Technology, 7: 5-18.
7. Chiariglione, L. (1995): MPEG: A technological basis for multimedia applications. IEEE Multimedia, 2(1): 85-89.
8. Jayant, N., Johnston, J. D., Safranek, R. J. (1993): Signal compression based on models of human perception. Proceedings of the IEEE, 81(10): 1385-1422.
9. Glenn, W. E. (1993): Digital image compression based on visual perception and scene properties. Society of Motion Picture and Television Engineers Journal, 392-397.
10. Tong, H. (1997): A Perceptually Adaptive JPEG Coder. M.A.Sc. Thesis, Department of Electrical and Computer Engineering, University of Toronto.
11. Gersho, A., Ramamurthi, B. (1982): Image coding using vector quantization. Proceedings of the IEEE Conference on Acoustics, Speech and Signal Processing, 1: 428-431.
12. Clarke, R. J. (1985): Transform Coding of Images. Academic Press, New York, N.Y.
13. Rao, K. R., Yip, P. (1990): Discrete Cosine Transform: Algorithms, Advantages, Applications. Academic Press, London, U.K.
14. Woods, J. W. (1991): Subband Image Coding. Kluwer, Boston, MA.
15. Shapiro, J. M. (1993): Embedded image coding using zerotrees of wavelet coefficients. IEEE Transactions on Signal Processing, 41: 3445-3462.
16. Davis, G., Danskin, J., Heasman, R. (1997): Wavelet image compression construction kit. Online report. www.cs.dartmouth.edu/gdavis/wavelet/wavelet.html
17. Kunt, M., Ikonomopoulos, A., Kocher, M. (1985): Second generation image coding techniques. Proceedings of the IEEE, 73(4): 549-574.
18. Ebrahimi, T., Kunt, M. (1998): Visual data compression for multimedia applications. Proceedings of the IEEE, 86(6): 1109-1125.
19. Pearson, D. (1995): Developments in model-based video coding. Proceedings of the IEEE, 83: 892-906.
20. Fisher, Y. (ed.) (1995): Fractal Image Compression: Theory and Application to Digital Images. Springer Verlag, New York, N.Y.
21. Jayant, N. (1992): Signal compression: Technology targets and research directions. IEEE Journal on Selected Areas in Communications, 10: 796-818.
22. Domanski, M., Bartkowiak, M. (1998): Compression. In: Sangwine, S. J., Horne, R. E. N. (eds.), The Colour Image Processing Handbook, 242-304, Chapman & Hall, Cambridge, Great Britain.
23. Penney, W. (1988): Processing pictures in HSI space. The Electronic System Design Magazine, 61-66.
24. Moroney, N. M., Fairchild, M. D. (1995): Color space selection for JPEG image compression. Journal of Electronic Imaging, 4(4): 373-381.
25. Kuduvalli, G. R., Rangayyan, R. M. (1992): Performance analysis of reversible image compression techniques for high resolution digital teleradiology. IEEE Transactions on Medical Imaging, 11: 430-445.

26. Gonzalez, R. C., Woods, R. E. (1992): Digital Image Processing. Addison-Wesley, Massachusetts.
27. Roger, R. E., Arnold, J. F.: Reversible image compression bounded by noise. IEEE Transactions on Geoscience and Remote Sensing, 32: 19-24.
28. Provine, J. A., Rangayyan, R. M. (1994): Lossless compression of Peano scanned images. Journal of Electronic Imaging, 3(2): 176-180.
29. Witten, I. H., Moffat, A., Bell, T. C. (1994): Managing Gigabytes: Compressing and Indexing Documents and Images. Van Nostrand Reinhold.
30. Boncelet Jr., C. G., Cobbs, J. R., Moser, A. R. (1988): Error free compression of medical X-ray images. Proceedings of Visual Communications and Image Processing '88, 1001: 269-276.
31. Wallace, G. K. (1991): The JPEG still picture compression standard. Communications of the ACM, 34(4): 30-44.
32. Ahmed, N., Natarajan, T., Rao, K. R. (1974): Discrete cosine transform. IEEE Transactions on Computers, 23: 90-93.
33. Bhaskaran, V., Konstantinides, K. (1995): Image and Video Compression Standards. Kluwer, Boston, MA.
34. Leger, A., Omachi, T., Wallace, G. K. (1991): JPEG still picture compression algorithm. Optical Engineering, 30: 947-954.
35. Egger, O., Li, W. (1995): Subband coding of images using symmetrical filter banks. IEEE Transactions on Image Processing, 4(4): 478-485.
36. Van Dyk, R. E., Rajala, S. A. (1994): Subband/VQ coding of color images with perceptually optimal bit allocation. IEEE Transactions on Circuits and Systems for Video Technology, 4(1): 68-82.
37. Lewis, A. S., Knowles, G. (1992): Image compression using the 2-D wavelet transform. IEEE Transactions on Image Processing, 1(2): 244-250.
38. Chen, D., Bovik, A. C. (1990): Visual pattern image coding. IEEE Transactions on Communications, 38(12): 2137-2145.
39. Barnsley, M. F. (1988): Fractals Everywhere. Academic Press, N.Y.
40. Jacquin, A. E. (1992): Image coding based on a fractal theory of iterated contractive image transformations. IEEE Transactions on Image Processing, 1: 18-30.
41. Lu, G. (1993): Fractal image compression. Signal Processing: Image Communication, 4(4): 327-343.
42. Jayant, N., Johnston, J., Safranek, R. (1993): Perceptual coding of images. SPIE Proceedings, 1913: 168-178.
43. Klein, S. A., Silverstein, A. D., Carney, T. (1992): Relevance of human vision to JPEG-DCT compression. SPIE Proceedings, 1666: 200-215.
44. Nill, N. B. (1985): A visual model weighted cosine transform for image compression and quality assessment. IEEE Transactions on Communications, 33: 551-557.
45. Rosenholtz, R., Watson, A. B. (1996): Perceptual adaptive JPEG coding. Proceedings, IEEE International Conference on Image Processing, I: 901-904.
46. Eom, I. K., Kim, H. S., Son, K. S., Kim, Y. S., Kim, J. H. (1995): Image coding using wavelet transform and human visual system. SPIE Proceedings, 2418: 176-183.
47. Kocher, M., Leonardi, R. (1986): Adaptive region growing technique using polynomial functions for image approximations. Signal Processing, 11(1): 47-60.
48. Mitchell, J., Pennebaker, W., Fogg, C. E., LeGall, D. J. (1997): MPEG Video Compression Standard. Chapman and Hall, N.Y.
49. Fleury, P., Bhattacharjee, S., Piron, L., Ebrahimi, T., Kunt, M. (1998): MPEG-4 video verification model: A solution for interactive multimedia applications. Journal of Electronic Imaging, 7(3): 502-515.

50. Ramos, M. G. (1998): Perceptually based scalable image coding for packet networks. Journal of Electronic Imaging, 7(3): 453-463.
51. Strang, G., Nguyen, T. (1996): Wavelets and Filter Banks. Wellesley-Cambridge Press, Wellesley, MA.
52. Chow, C. H., Li, Y. C. (1996): A perceptually tuned subband image coder based on the measure of just noticeable distortion profile. IEEE Transactions on Circuits and Systems for Video Technology, 5(6): 467-476.

8. Emerging Applications

Multimedia data processing refers to the combined processing of multiple data streams of various types. Recent advances in hardware, software and digital signal processing allow for the integration of different data streams, which may include voice, digital video, graphics and text, within a single platform. A simple example is the simultaneous use of audio, video and closed-caption data for content-based searching and browsing of multimedia databases, or the merging of vector graphics, text, and digital video. This rapid development is the driving force behind the convergence of the computing, telecommunications, broadcast, and entertainment technologies. The field is developing rapidly, and emerging multimedia applications, such as intelligent visual search engines, multimedia databases, Internet/mobile audio-visual communication, and desktop video-conferencing, will all have a profound impact on modern professional life, health care, education, and entertainment. The full development and consumer acceptance of multimedia will create a host of new products and services, including new business opportunities for innovative companies. However, in order for these possibilities to be realized, a number of technological problems must be addressed. These include, but are not limited to, the following:
1. Novel methods to process multimedia signals in order to meet quality of service requirements. In the majority of multimedia applications, the devices used to capture and display information vary considerably. Data acquired by optical, electro-optical or electronic means are likely to be degraded by the sensing environment. For example, a typical photograph may have excessive film grain noise, suffer from various types of blurring, such as motion or focus blur, or exhibit unnatural shifts in hue, saturation or brightness. Noise introduced by the recording media further degrades the quality of the resulting images. It is anticipated that the use of digital processing techniques, such as filtering and signal enhancement, will improve the performance of such systems.
2. Efficient compression and coding of multimedia signals. In particular, visual signals, with an emphasis on negotiable quality of service contracts, must be considered. Rich data types such as digital images and video signals have enormous storage and bandwidth requirements. Techniques that allow images to be stored and transmitted in more compact formats are therefore of great importance.

Multimedia applications are placing higher demands on both the achieved image quality and the compression ratio. Quality is the primary consideration in applications such as DVD drives, interactive HDTV, and digital libraries. Existing techniques achieve compression ratios of 10:1 to 15:1 while maintaining reasonable image quality. However, higher compression ratios can reduce the high cost of storage and transmission, and can also lead to new applications, such as future display terminals with photo quality resolution or the simultaneous broadcast of a larger number of visual programs.

3. Innovative techniques for indexing and searching multimedia data. Multimedia information is difficult to handle both in terms of its size and because of the scarcity of tools available for navigation and retrieval. A key problem is the effective representation of this data in an environment in which users from different backgrounds can retrieve and handle information without specialized training. Unlike alphanumeric data, multimedia information does not have any inherent semantic structure, so conventional information management systems cannot be used directly to manage multimedia data. Content-based approaches seem to be a natural choice, where audio information along with visual indices of color, shape, and motion are more appropriate descriptions. A set of effective quality measures is also necessary in order to assess the success of different techniques and algorithms.
In each of these areas, a great deal of progress has been made in the past few years, driven in part by the availability of increased computing power and the introduction of new standards for multimedia services. For example, the emergence of the MPEG-7 multimedia standard demands an increased level of intelligence that will allow the efficient processing of raw information; the recognition of dominant features; the extraction of objects of interest; and the interpretation of, and interaction with, multimedia data. Thus, effective multimedia signal processing techniques can offer promising solutions in all of the aforementioned areas.
Digital video is an integral part of many newly emerging multimedia applications. Recent advances in the area of mobile communications and the tremendous growth of the Internet have placed even greater demands on the need for more effective video coding schemes. However, future coding techniques must focus on providing better ways to represent, integrate and exchange visual information in addition to efficient compression methods. These efforts aim to provide the user with greater flexibility for "content-based" access and manipulation of multimedia data. Numerous video applications such as portable videophones, video-conferencing, multimedia databases, and video-on-demand can greatly benefit from better compression schemes and this added "content-based" functionality.

The next generation of coding algorithms focuses on representing a scene in terms of "objects" rather than square blocks [1], [2], [3]. This approach not only improves the coding efficiency and alleviates the blocking artifacts, but can also support the content-based functionalities mentioned previously by allowing interactivity and manipulation of specific objects within the video stream. These are some of the objectives and issues addressed within the framework of the MPEG-4 and future MPEG-7 standards [4].
In order to obtain an object-based representation, an input video sequence must first be segmented into an appropriate set of arbitrarily shaped regions. In a videophone-type application, for example, an accurate segmentation of the facial region can serve two purposes: (i) it allows the encoder to place more emphasis on the facial region, since this area, the eyes and mouth in particular, is the focus of attention of a human observer, and (ii) it can also be used to extract features, such as personal characteristics, facial expressions, and composition information, so that higher level descriptions can be generated. In a similar fashion, the contents of a video database can be segmented into individual objects, so that the following features can be supported: (i) sophisticated query and retrieval functions, (ii) advanced editing and compositing, and (iii) better compression ratios.
A method to automatically locate and track the facial region of a head-and-shoulders videophone-type sequence using color and shape information is reviewed here. The face localization method consists of essentially two components, namely: (i) a color processing unit, and (ii) a fuzzy-based shape and color analysis module. The color processing component utilizes the distribution of skin tones in the HSV color space to obtain an initial set of candidate regions or objects. The shape and color analysis module is then used to correctly identify the facial regions when falsely detected objects are extracted. A number of fuzzy membership functions are devised to provide information about each object's shape, orientation, location, and average hue, and an aggregation operator, similar to the one used in Chap. 3, combines these measures and correctly selects the facial area. The methodology presented here is robust with regard to different skin types and various types of object or background motion within the scene. Furthermore, the algorithm can be implemented at a low computational complexity due to the binary nature of the operations performed.

8.1 Input Analysis Using Color Information

The detection and automatic location of the human face is vital in numerous applications, including human recognition for security purposes, human-computer interfaces, and, more recently, video coding and content-based storage/retrieval in image and video databases. Several techniques based on shape and motion information have recently been proposed for the automatic location of the facial region [5], [6].

In [5] the technique is based on fitting an ellipse to a thresholded binary edge image, while in [6] the approach utilizes the shape of thresholded frame differences. In the approach presented here, color is used as the primary tool for detecting and locating the facial areas in a scene with a complex or moving background. Color is a key feature used to understand and recollect the contents of a scene. It has also been found to be a highly reliable attribute for image retrieval, as it is generally invariant to translation, rotation, and scale changes [7].
The segmentation of a color image is the process of classifying the pixels within the image into a set of regions with uniform color characteristics. The objective in our approach is to detect and isolate the color regions that correspond to the skin areas of the facial region. However, the shape and distribution of the regions that are formed depend on the chosen color space [8]. Therefore, the most advantageous color space must first be selected in order to obtain the most effective segmentation results. It has been found that the skin clusters are well partitioned in the HSV (hue, saturation, value) space, where the segmentation can be performed by a simple thresholding scheme in one dimension rather than a more expensive multidimensional clustering technique. Furthermore, this color model is very intuitive in describing the color/intensity content of a scene. Analogous results have been found in the similar HSI space [9].
It was mentioned in Chap. 1 that color information is commonly represented in the widely used RGB coordinate system. This color space is hardware oriented and is suitable for acquisition or display devices, but it is not particularly well suited to describing the perception of colors. The HSV color model, on the other hand, corresponds more closely to the human perception of color. The HSV color space is conveniently represented by the hexcone model shown in Chap. 1 [10]. The hue (H) is measured by the angle around the vertical axis and has a range of values between 0 and 360 degrees, beginning with red at 0 degrees; it gives a measure of the spectral composition of a color. The saturation (S) is a ratio that ranges from 0 (on the V axis), extending radially outwards to a maximum value of 1 on the triangular sides of the hexcone; this component refers to the proportion of pure light of the dominant wavelength. The value (V) also ranges between 0 and 1 and is a measure of relative brightness. A fast algorithm [10] is used here to convert the set of RGB values to the HSV color space.
Certain steps of the proposed segmentation scheme require the comparison of color features. For example, during clustering, color regions are compared with one another to test for similarity. As mentioned in Sect. 6.8.1, when comparing the colors of two regions or pixels, a problem is encountered when one or both of the regions or objects have little or no chromatic information. That is, a gray scale object cannot successfully be compared to an object that has substantial chromatic information. As in the segmentation scheme of Sect. 6.8, all the pixels in the image are classified as either chromatic or achromatic pixels.

This is done by considering the discontinuities of the hue color channel. Classifying the pixels as either chromatic or achromatic can be considered a crude form of segmentation, since the image is segmented into two groups. Although this classification does affect the face localization algorithm, it does not change the pixel colors; the chromatic/achromatic information is used in the algorithm as an indication of whether two colors should be considered similar.
As mentioned earlier, the segmentation of the skin areas within an image is most effective when a suitable color space is selected for the task. This is the case when the skin clusters are compact, distinct, and easy to extract from the color coordinate system. The complexity of the algorithm must also be low in order to facilitate real-time applications. The HSV color space was found to be the most suitable, as it produces clusters that are clearly separated, allowing them to be detected and readily extracted. Three color spaces were compared during experimentation: the HSV, RGB and L*a*b* color spaces. These three coordinate systems cover the different color space groups (hardware-based, perceptually uniform, and hue-oriented) and are frequently selected color models for testing the performance of proposed color image segmentation algorithms. The RGB and L*a*b* spaces showed ambiguity in the partitioning of the regions. Data from two different skin-colored regions, as well as the lip area, from a different set of images were manually extracted and plotted in each of the aforementioned coordinate systems in order to observe the clusters formed. The results obtained from the RGB space are shown in Fig. 8.1.

Fig. 8.1. Skin and Lip Clusters in the RGB color space

Fig. 8.2. Skin and Lip Clusters in the L*a*b* color space

In the figures above, it can be seen that the skin clusters are positioned relatively close to one another; however, the individual clusters are not compact.

Fig. 8.3. Skin and Lip hue Distributions in the HSV color space

Each forms a diagonal, elongated shape that makes the extraction process difficult. In Fig. 8.2, the skin and lip clusters are displayed in the L*a*b* color space. In this case, the individual clusters are more compact but are spaced quite a distance apart. In fact, the Euclidean distance from skin cluster #1 to the lip cluster is roughly equivalent to that from skin cluster #1 to cluster #2. Thus, the skin clusters do not exhibit a global compactness, which once again makes them difficult to isolate and extract. The L*a*b* space is also computationally expensive due to the cube-root expressions in the transformation equations. Finally, in Fig. 8.3, the hue component of the skin and lip clusters from the HSV space is shown. The graph illustrates that the spectral compositions of the skin and lip areas are distinct and compact. Skin clusters #1 and #2 are contained within the hue range of 10° to 40°, while the lip region lies at a mean hue value of about 2° (i.e. close to the red hue value at 0°). Thus, the skin clusters are well partitioned, allowing the segmentation to be performed by a thresholding scheme along the hue axis rather than by a more expensive multidimensional clustering technique. The HSV model is also advantageous in that the mean hue of the skin values gives an indication of the skin tone of the facial region in the image. Average hue values closer to 0° contain a greater amount of reddish spectral composition, while those towards 60° contain greater yellowish spectral content. This can be useful for content-based storage and retrieval in MPEG-4 and MPEG-7 applications as well as in multimedia databases. On the contrary, central cluster values in the other coordinate systems (i.e. $[R_c, G_c, B_c]^T$ or $[L_c, a_c, b_c]^T$) do not provide the same meaningful description to a human observer. Having defined the selected HSV color space, a technique to determine and extract the color clusters that correspond to the facial skin regions must


be devised. This requires an understanding of where these clusters form in the space outlined in the previous section. The identification and tracking of the facial region rely on a priori knowledge of the skin-tone distributions in the HSV color space. It has been found that skin-colored clusters form within a rather well defined region of chromaticity space [11], and also within the HSV hexcone model [12], for a variety of different skin types. In the HSV space in particular, the skin distribution was found to lie predominantly within the limited hue range between 0° and 50° (Red-Yellow), and in certain cases within 340°-360° (Magenta-Red) for darker skin types [13]. The saturation component suggests that skin colors are somewhat, but not deeply, saturated, with varying levels of intensity. The hue component is the most significant feature in defining the characteristics of the skin clusters. However, the hue can be unreliable when: 1) the level of brightness (i.e. value) in the scene is low, or 2) the regions under consideration have low saturation values. The first condition can occur in areas of the image where there are shadows or, generally, under low lighting levels. In the second case, low values of saturation are found in the achromatic regions of a scene. Thus, appropriate thresholds must be defined for the value and saturation components over which the hue attribute is reliable. The following polyhedron corresponding to skin-colored clusters, with well defined saturation and value components, has been determined from a large sample set [13]:

\[
T_{hue1} = 340^{\circ} \leq H \leq T_{hue2} = 360^{\circ}  \qquad (8.1)
\]

\[
T_{hue3} = 0^{\circ} \leq H \leq T_{hue4} = 50^{\circ}  \qquad (8.2)
\]

\[
S \geq T_{sat1} = 20\%  \qquad (8.3)
\]

\[
V \geq T_{val} = 35\%  \qquad (8.4)
\]

The extent of the above hue range is purposely designed to be quite wide so that a variety of different skin types can be modeled. As a result, however, other objects in the scene with skin-like colors may also be extracted. Nevertheless, these objects can be separated by analyzing the hue histogram of the extracted pixels. The valleys between the peaks are used to identify the various objects that possess different hue ranges (e.g. the facial region and differently colored objects). Scale-space filtering [14] is used to smoothen the histogram and obtain the meaningful peaks and valleys. This process is carried out by convolving the original hue histogram, $f_h(x)$, with a Gaussian function, $g(x; \sigma)$, of zero mean and standard deviation $\sigma$ as follows:


\[
F_h(x; \sigma) = f_h(x) * g(x; \sigma)  \qquad (8.5)
\]

\[
F_h(x; \sigma) = \int_{-\infty}^{\infty} f_h(u) \, \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left[ \frac{-(x-u)^2}{2\sigma^2} \right] du  \qquad (8.6)
\]

where $F_h(x; \sigma)$ represents the smoothed histogram. The peaks and valleys are determined by examining the first and second derivatives of $F_h$. In the remote case that another object matches the skin color of the facial area (i.e. separation is not possible by the scale-space filter), the shape analysis module that follows provides the necessary discriminatory functionality.

A series of post-processing operations, which include median filtering and region filling/removal, are subsequently used to refine the regions obtained from the initial extraction stage. Median filtering is the first of the two post-processing operations performed after the initial color extraction. The median operation is introduced in order to smoothen the segmented object silhouettes and to eliminate isolated misclassified pixels that appear as impulsive-type noise. Square filter windows of size (5x5) or (7x7) provide a good balance between adequate noise suppression and sufficient detail preservation. This operation is computationally inexpensive since it is carried out on bi-level images, i.e. the object silhouettes. The median operation successfully removes misclassified, noise-like pixels; however, small isolated regions and small holes within object areas may still remain after this step. Thus, median filtering is followed by region filling and removal. This second post-processing operation fills in small holes within objects, which may occur due to color differences (e.g. the eyes and mouth within the facial skin region), extreme shadows, or unusual lighting effects such as specular reflection. At the same time, any erroneous small regions are eliminated as candidate object areas.

It has been found that the hue attribute is reliable when the saturation component is greater than 20% and meaningless when it is less than 10% [13]. Similar results have also been confirmed in the cylindrical L*u*v* color model [15]. Saturation values between 0% and 10% correspond to the achromatic areas within a scene, while those greater than 20% correspond to the chromatic ones. The range between 10% and 20% represents a transition region from the achromatic to the chromatic areas. It has been observed that, in certain cases, the addition of a select number of pixels within this 10-20% range can improve the results of the initial extraction process. In particular, the initial segmentation may not capture smaller areas of the face when the saturation component is decreased by the lighting conditions. Thus, pixels within this transition region are selected accordingly [13] and merged with the initially extracted objects. A pixel within the transitional region is added to a particular object if its distance to the closest object is within a threshold. A reasonable selection can be made if the threshold is set to a factor of 1.0-1.5 times the distance from the centroid of the object to its most


distant point. The results from this step are once again refined by the two post-processing operations described earlier. At this point, one or more of the extracted objects correspond to the facial regions. In certain video sequences, however, gaps or holes have been found around the eyes of the segmented facial area. This occurs in sequences where the forehead is covered by hair and, as a result, the eyes fail to be included in the segmentation. Two morphological operators are utilized to overcome this problem and at the same time smoothen the facial contours. A morphological closing operation is first used to fill in small holes and gaps, followed by a morphological opening operation which removes small spurs and thin channels [16]. Both of these operations maintain the original shapes and sizes of the objects. A compact structuring element, such as a circle or square without holes, can be used to implement these operations and also helps to smoothen the object contours. Furthermore, these binary morphological operations can be implemented by low-complexity hit-or-miss transformations [16]. The morphological stage is the final step prior to any analysis of the extracted objects. The results at this point contain one or more objects that correspond to the facial areas within the scene. The block diagram in Fig. 8.4 summarizes the proposed face localization procedure. The shape and color analysis unit, described next, provides the mechanism to correctly identify the facial regions.
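A minimal sketch of this morphological refinement step, assuming a disk-shaped structuring element and SciPy's binary morphology routines; the radius and function names are illustrative and do not reproduce the authors' implementation (which may instead rely on hit-or-miss transformations).

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_opening

def disk(radius):
    """Compact, hole-free structuring element (a discrete disk)."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def morphological_refinement(mask, radius=3):
    """Closing fills small holes and gaps (e.g. around the eyes);
    opening then removes small spurs and thin channels."""
    se = disk(radius)
    return binary_opening(binary_closing(mask, structure=se), structure=se)
```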

Fig. 8.4. Overall scheme to extract the facial regions within a scene (input image or video sequence → initial color extraction → post-processing → addition of low-saturation components → post-processing and morphological operations → shape and color analysis → facial regions)
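For completeness, the stages of Fig. 8.4 that precede the morphological step can be sketched along the same lines. The thresholds follow (8.1)-(8.4) and the histogram smoothing follows (8.5)-(8.6); the bin count, the value of σ, the window size, and the function names are assumptions made purely for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, median_filter

def skin_mask(hsv):
    """Initial color extraction: the skin-color polyhedron of (8.1)-(8.4).
    hsv holds hue in degrees [0, 360) and saturation/value in [0, 1]."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    hue_ok = ((h >= 340.0) & (h <= 360.0)) | ((h >= 0.0) & (h <= 50.0))
    return hue_ok & (s >= 0.20) & (v >= 0.35)

def smoothed_hue_histogram(hsv, mask, sigma=3.0, bins=360):
    """Scale-space smoothing of the hue histogram, cf. (8.5)-(8.6).
    Valleys of the result separate objects with different hue ranges."""
    hist, _ = np.histogram(hsv[..., 0][mask], bins=bins, range=(0.0, 360.0))
    return gaussian_filter1d(hist.astype(float), sigma, mode='wrap')

def median_postprocess(mask, size=5):
    """Median filtering of the bi-level silhouette (5x5 or 7x7 window)."""
    return median_filter(mask.astype(np.uint8), size=size).astype(bool)
```

Region filling/removal, the merging of low-saturation transition pixels, and the morphological refinement sketched above would complete the chain before the shape and color analysis.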

8.2 Shape and Color Analysis

The input to the shape and color analysis module may contain objects other than the facial areas. Thus, the function of this module is to identify the actual facial regions from the set of candidate objects. In order to achieve this, a number of expected facial characteristics, such as shape, color, symmetry, and location, are used in the selection process. Fuzzy membership functions are constructed in order to quantify the expected values of each characteristic. Thus, the value of a particular membership function gives an indication of the `goodness of fit' of the object under consideration with the corresponding


feature. An overall `goodness of fit' value can finally be derived for each object by combining the measures obtained from the individual primitives.

For the segmentation and localization scheme, a set of features suitable for our application purposes is utilized. In facial image databases, such as employee databases, or in videophone-type sequences, such as video archives of newscasts and interviews, the scene consists predominantly of upright faces contained within the image. Thus, features such as the location of the face, its orientation from the vertical axis, and its aspect ratio can be utilized to assist with the recognition task. These features can be determined in a simple and fast manner (a moment-based sketch is given after the list below), as opposed to measurements based on facial features such as the eyes, nose, and mouth, which may be difficult to compute because these features may be small or occluded in certain images. More specifically, the following four primitives are considered in the face localization system [17], [18]:

1. Deviation from the average hue value of the different skin-type categories. The average hue value for different skin types varies amongst humans and depends on the race, gender, and age of the person. However, the average hue of different skin types falls within a more restricted range than the wider one defined by the HSV model [13]. The deviation of an object's hue value from this restricted range gives an indication of its similarity to skin-tone colors.
2. Face aspect ratio. Given the geometry and shape of the human face, it is reasonable to expect that the ratio of height to width falls within a specific range. If the dimensions of a segmented object fit the commonly accepted dimensions of the human face, then it can be classified as a facial area.
3. Vertical orientation. The location of an object in a scene depends largely on the viewing angle of the camera and the acquisition devices. For the intended applications it is assumed that only reasonable rotations of the head are allowed in the image plane. This corresponds to a small deviation of the facial symmetry axis from the vertical direction.
4. Relative position of the facial region in the image plane. By reasoning similar to (3) above, it is more probable that the face will not be located right at the edges of the image but rather within a central window of the image.
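As an illustration of how cheaply such primitives can be measured, the sketch below derives the aspect ratio, the deviation from the vertical, and the centroid of a candidate object from its binary mask using image moments. The function name and the moment-based orientation estimate are assumptions made for illustration, not the authors' exact measurement procedure.

```python
import numpy as np

def shape_primitives(obj_mask):
    """Measure simple shape primitives of one candidate object.

    Returns the height/width aspect ratio, the deviation of the principal
    axis from the vertical (degrees), and the centroid -- a moment-based
    approximation used only for illustration.
    """
    ys, xs = np.nonzero(obj_mask)
    cy, cx = ys.mean(), xs.mean()                         # centroid
    # Central second-order moments.
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    # Principal-axis angle from the x-axis, then deviation from vertical.
    theta = np.degrees(0.5 * np.arctan2(2.0 * mu11, mu20 - mu02))
    tilt = abs(90.0 - abs(theta))
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return {'aspect': height / width,                     # expected near 1.5
            'tilt': tilt,                                 # small for upright faces
            'centroid': (cy, cx)}
```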

8.2.1 Fuzzy Membership Functions

A number of membership function models can be constructed and empirically evaluated. A trapezoidal function model is utilized here for each primitive in order to keep the complexity of the overall scheme to a minimum. This type of membership function attains its maximum value only over a limited range of input values. Symmetric or asymmetric trapezoidal shapes can be obtained depending on the selected parameter values. As in Chap. 3, the membership


function can assume any value in the interval [0, 1], including both of the extreme values. A value of 0 indicates that the event is impossible. On the contrary, the maximum membership value of 1 represents total certainty. The intermediate values are used to quantify variable degrees of uncertainty. The estimates for the four membership functions are obtained from a collection of physical measurements of each primitive over a database of facial images and sequences [13].

The hue characteristics of the facial region (for different skin-type categories) were used to form the first membership function. This function is built on the discrete universe of discourse [-20, 50] (e.g. -20 ≡ 340°). The lower bound of the average hue observed in the image database is approximately 8° (African-American distribution), while the upper bound average value is around 30° (Asian distribution) [13]. A range is formed using these values, and an object is accepted as a skin-tone color with probability 1 if its average hue value falls within these bounds. Thus, the membership function associated with the first primitive is defined as follows:

\[
\mu(x) = \begin{cases} \frac{x+20}{28} & \text{if } -20 \leq x \leq 8 \\ 1 & \text{if } 8 \leq x \leq 30 \\ \frac{50-x}{20} & \text{if } 30 \leq x \leq 50 \end{cases}  \qquad (8.7)
\]

Experimentation with a wide variety of facial images has led to the conclusion that the aspect ratio (height/width) of the human face has a nominal value of approximately 1.5. This finding confirms previous results reported in the open literature [9]. However, in certain images, compensation for the inclusion of the neck area, which has skin-tone characteristics similar to the facial region, must also be considered. This has the effect of slightly increasing the aspect ratio. Using this information, along with the observed aspect ratios from the database, the parameters of the trapezoidal function for this second primitive can be tuned. The final form of the function is given by:

\[
\mu(x) = \begin{cases} \frac{x-0.75}{0.5} & \text{if } 0.75 \leq x \leq 1.25 \\ 1 & \text{if } 1.25 \leq x \leq 1.75 \\ \frac{2.25-x}{0.5} & \text{if } 1.75 \leq x \leq 2.25 \\ 0 & \text{otherwise} \end{cases}  \qquad (8.8)
\]

The vertical orientation of the face in the image is the third primitive used in the shape recognition system. As mentioned previously, the orientation of the facial area (i.e. the deviation of the facial symmetry axis from the vertical axis) is likely to be close to vertical for the type of applications considered. A reasonable threshold of 30° can be selected for the valid head rotations observed within our database. Thus, a membership value of 1 is returned if the orientation angle is less than this threshold. The membership function for this primitive is defined as follows:

\[
\mu(x) = \begin{cases} 1 & \text{if } 0 \leq x \leq 30 \\ \frac{90-x}{60} & \text{if } 30 \leq x \leq 90 \end{cases}  \qquad (8.9)
\]
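Each of (8.7)-(8.9) is an instance of the same trapezoidal template, so a single helper suffices; the sketch below is illustrative, with the helper name and the lambda wrappers chosen for convenience rather than taken from the original system.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear ramps."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Hue primitive (8.7): universe [-20, 50] degrees, plateau on [8, 30].
mu_hue = lambda h: trapezoid(h, -20.0, 8.0, 30.0, 50.0)
# Aspect-ratio primitive (8.8): plateau on [1.25, 1.75] around the nominal 1.5.
mu_aspect = lambda r: trapezoid(r, 0.75, 1.25, 1.75, 2.25)
# Orientation primitive (8.9): full membership up to 30 degrees from vertical
# (orientation angles are nonnegative, so the left ramp is never exercised).
mu_orient = lambda t: trapezoid(t, -1.0, 0.0, 30.0, 90.0)
```

For example, mu_hue(20.0) returns 1.0 and mu_hue(40.0) returns 0.5, matching (8.7).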


The last primitive used in the knowledge-based system refers to the relative position of the face in the image. Due to the nature of the applications considered, a smaller weighting is assigned to objects that appear closer to the edges and corners of the image. For this purpose, two membership functions are constructed. The first one returns a confidence value for the location of the segmented object with respect to the X-axis. Similarly, the second one quantifies our knowledge about the location of the object with respect to the Y-axis. The following membership function has been defined for the position of a candidate object with respect to either the X- or Y-axis:

\[
\mu(x) = \begin{cases} \frac{x-d}{d/2} & \text{if } d \leq x \leq \frac{3d}{2} \\ 1 & \text{if } \frac{3d}{2} \leq x \leq \frac{5d}{2} \\ \frac{3d-x}{d/2} & \text{if } \frac{5d}{2} \leq x \leq 3d \\ 0 & \text{otherwise} \end{cases}  \qquad (8.10)
\]

The membership function for the X-axis is determined by letting $d = D_x/4$, where $D_x$ represents the horizontal dimension of the image (i.e. in the X-direction). In a similar way, the Y-axis membership function is found by letting $d = D_y/4$, where $D_y$ represents the vertical dimension of the image (i.e. in the Y-direction).
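The position primitive fits the same trapezoidal template; a small, self-contained sketch of (8.10) follows, where the function name and the example frame width are assumptions used only for illustration.

```python
def mu_position(x, dim):
    """Position primitive (8.10) along one axis: d = dim / 4, with the
    plateau covering the central window [1.5 d, 2.5 d] of the image."""
    d = dim / 4.0
    if x <= d or x >= 3.0 * d:
        return 0.0
    if 1.5 * d <= x <= 2.5 * d:
        return 1.0
    return (x - d) / (0.5 * d) if x < 1.5 * d else (3.0 * d - x) / (0.5 * d)

# Example: an object centred at column 310 of an assumed 720-pixel-wide frame
# falls inside the central plateau, so its X-position membership is 1.0.
print(mu_position(310.0, 720.0))
```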

8.2.2 Aggregation Operators

The individual membership functions expressed above must be appropriately combined to form an overall decision. A number of fuzzy operators can be used to combine, or fuse together, the various sources of information. Conjunctive operators weigh the criterion with the smallest membership value most heavily, while disjunctive ones assign the most weight to the criterion with the largest membership value. Here, a compensative operator, which offers a compromise between conjunctive and disjunctive behavior, is utilized. This type of operator is defined as the weighted mean of a logical AND and a logical OR operator:

\[
\mu_{A \circ B} = \left( \mu_{A \cap B} \right)^{1-\gamma} \left( \mu_{A \cup B} \right)^{\gamma}  \qquad (8.11)
\]

where A and B are sets defined on the same space and represented by their membership functions [19]. If the product of the membership functions is used for the intersection (logical AND) and the possibilistic sum for the union (logical OR), then the operator takes the following form [19]:

\[
\mu_c = \left( \prod_{j=1}^{m} \mu_j \right)^{1-\gamma} \left( 1 - \prod_{j=1}^{m} (1 - \mu_j) \right)^{\gamma}  \qquad (8.12)
\]

where $\mu_c$ is the overall membership function which combines all the knowledge primitives for a particular object, and $\mu_j$ is the elemental membership value associated with the $j$th primitive. The weighting parameter $\gamma$ is


interpreted as the grade of compensation, taking values in the range [0, 1] [19]. The product and the possibilistic sum, however, are not the only operators that may be used [20]. A simple and useful t-norm is the min operator, while the corresponding t-conorm is the max operator. These operators were selected to model the compensative operator, which then assumes the form of a weighted product as follows:

\[
\mu_c = \left( \left( \min_{j=1}^{m} \mu_j \right) \left( \max_{j=1}^{m} \mu_j \right) \right)^{0.5}  \qquad (8.13)
\]

where the grade of compensation $\gamma = 0.5$ provides a good compromise between conjunctive and disjunctive behavior [20]. The aggregation operator defined in (8.13) is used to form the final decision based on the designed primitives.

Multimedia databases are comprised of a number of different media types, such as images and video, that are binary by nature and hence unstructured. An appropriate set of interpretations must be derived for these media objects in order to allow for content-based functionalities, which include storage and retrieval. These interpretations, or `metadata', are generated by applying a set of feature-extracting functions to the contained media objects [21]. These functions are media dependent and are unique even within each media type. The following four steps are necessary in extracting the features from image object types: (i) object locator design, (ii) feature selection, (iii) classifier design, and (iv) classifier training. The function of the object locator is to isolate the individual objects of interest within the image through a suitable segmentation algorithm. In the second step, specific features are selected to identify the different types of objects that might occur within the images of interest. The classifier design stage is then used to establish a mathematical basis for distinguishing the different objects based on the designed features. Finally, the last step is used to train and update the classifier module by adjusting various parameters.

In the previous section, the object locator used to automatically isolate and track the facial area within a facial image database or a videophone-type sequence was described. Now, a set of features that may be used in constructing a metadata feature vector for the classifier design and training stages is proposed. Having determined the facial regions within the image, an n-dimensional feature vector, $f = (f_1, f_2, \ldots, f_n)$, that may be used for content-based storage and retrieval purposes can be constructed. Several features that may be incorporated within a more detailed metadata feature vector are presented here. More specifically, the use of hair and skin color, and of face location and size, is proposed as a preliminary set. Hair color is a significant human characteristic that can be effectively employed in user queries to retrieve particular facial images. A scheme to categorize black, gray/white, brown, and blonde hair colors within the HSV space has been determined. First, the H, S, and V component histograms of the hair regions are formed and smoothened using the scale-space filter


defined earlier. The peak values from each histogram are subsequently determined and used to form the appropriate classification. The following regions were found from the large sample set for the various categories of hair color:

1. Black: $V_p < 15\%$
2. Gray/White: $S_p < 20\% \;\cap\; V_p > 50\%$
3. Brown: $S_p \geq 20\% \;\cap\; 15\% \leq V_p < 40\%$
4. Blonde 20
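A sketch of this hair-color categorization, assuming HSV saturation and value in [0, 1] and histogram peaks taken after scale-space smoothing. The thresholds encode the first three rules as reconstructed above; the blonde thresholds are truncated in the list, so that branch is left unclassified here, and the function names, bin count, and σ are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def hair_color_category(hsv, hair_mask, sigma=3.0):
    """Classify hair color from the peak S and V histogram values.

    Thresholds are percentages of the full saturation and value ranges;
    the blonde rule is not encoded because its thresholds are not given.
    """
    s = hsv[..., 1][hair_mask] * 100.0     # saturation in percent
    v = hsv[..., 2][hair_mask] * 100.0     # value in percent

    def peak(values):
        hist, edges = np.histogram(values, bins=100, range=(0.0, 100.0))
        smooth = gaussian_filter1d(hist.astype(float), sigma)
        i = int(np.argmax(smooth))
        return 0.5 * (edges[i] + edges[i + 1])

    s_p, v_p = peak(s), peak(v)
    if v_p < 15.0:
        return 'black'
    if s_p < 20.0 and v_p > 50.0:
        return 'gray/white'
    if s_p >= 20.0 and 15.0 <= v_p < 40.0:
        return 'brown'
    return 'unclassified'                  # blonde rule truncated in the text
```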