A WAVELET-BASED WATERMARKING METHOD

A WAVELET-BASED WATERMARKING METHOD EXPLOITING THE CONTRAST SENSITIVITY FUNCTION

B.Tech. Project Report by

Ch. Hemanth Anvesh, I. Raj Kumar, M. Sachin Kumar, G. Thirupathi

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
GOKARAJU RANGARAJU INSTITUTE OF ENGINEERING AND TECHNOLOGY
(Affiliated to Jawaharlal Nehru Technological University)

HYDERABAD 500 090 2013

A WAVELET-BASED WATERMARKING METHOD EXPLOITING THE CONTRAST SENSITIVITY FUNCTION

Project Report Submitted in Partial Fulfillment of the Requirements for the Degree of

Bachelor of Technology in Electronics and Communication Engineering

by

Ch. Hemanth Anvesh (Roll No. 09241A0475)
I. Raj Kumar (Roll No. 09241A0492)
M. Sachin Kumar (Roll No. 09241A0499)
G. Thirupathi (Roll No. 09241A04B7)

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
GOKARAJU RANGARAJU INSTITUTE OF ENGINEERING AND TECHNOLOGY
(Affiliated to Jawaharlal Nehru Technological University)
HYDERABAD 500 090
2013

Department of Electronics and Communication Engineering
Gokaraju Rangaraju Institute of Engineering and Technology
(Affiliated to Jawaharlal Nehru Technological University)
Hyderabad 500 090
2013

Certificate

This is to certify that this project report entitled "A Wavelet-Based Watermarking Method Exploiting the Contrast Sensitivity Function" by Ch. Hemanth Anvesh (09241A0475), I. Raj Kumar (09241A0492), M. Sachin Kumar (09241A0499), and G. Thirupathi (09241A04B7), submitted in partial fulfillment of the requirements for the degree of Bachelor of Technology in Electronics and Communication Engineering of the Jawaharlal Nehru Technological University, Hyderabad, during the academic year 2009-2013, is a bona fide record of work carried out under our guidance and supervision. The results embodied in this report have not been submitted to any other University or Institution for the award of any degree or diploma.

(Guide) M. Suneetha (Assistant Professor)

(External Examiner)

(Head of Department) Dr. Ravi Billa

ACKNOWLEDGMENT

It is a pleasure to express thanks to Ms. M. Suneetha for the encouragement and guidance throughout the course of this project. We are grateful to Ms. K. Meenaskshi for her ever-willingness to give us valuable advice and direction whenever we approached her with a problem. We are thankful to her for providing immense guidance for this project.

It is our privilege to thank all the Project Review Committee members for allowing us to do this project and providing us all the facilities to carry it out.

We thank both the teaching and non-teaching staff members of the ECE Department for their kind cooperation and all sorts of help in bringing out this project work successfully.

Ch. Hemanth Anvesh

I. Raj Kumar

M. Sachin Kumar

G. Thirupathi

ABSTRACT

The efficiency of an image watermarking technique depends on the preservation of visually significant information. This is attained by embedding the watermark transparently with the greatest possible strength. To ensure effective copyright protection, the watermarking scheme should possess characteristics such as robustness and imperceptibility. Integrating Human Visual System (HVS) models into the watermarking scheme helps to attain effective copyright protection. Wavelet-domain schemes are currently the main focus of watermarking research. The sensitivity of a human observer to contrast as a function of spatial frequency is described by the Contrast Sensitivity Function (CSF). The strength of the watermark within the decomposition sub-bands, each of which occupies an interval of spatial frequencies, is adjusted according to this sensitivity. Moreover, the watermark embedding is carried out on the sub-band coefficients that lie on edges, where distortions are less noticeable.

Keywords: Image Watermarking, DWT, Edge Detector, Contrast Sensitivity Function.


List of Figures

1.1 Block Diagram  3
3.1 Demonstration of (a) a wave (b) a wavelet  12
3.2 Wavelet families (a) Haar (b) Daubechies4 (c) Coiflet1 (d) Symlet2 (e) Meyer (f) Morlet (g) Mexican Hat  14
3.3 Decomposition of an input image using downsampling  17
3.4 Reconstruction of an image using upsampling  18
3.5 DWT image decomposition and reconstruction  19
3.6 Different sub-bands after first decomposition level  19
3.7 Different sub-bands after second and third decomposition levels  20
3.8 Layouts of original image and DWT image  20
4.1 Impact of noise on edge detection  22
4.2 Edge patterns for Roberts edge detector: (a) s; (b) t  24
4.3 Edge patterns for Prewitt edge detector: (a) s; (b) t  25
4.4 Edge patterns for Sobel edge detector: (a) s; (b) t  25
4.5 Lena image with Sobel edge detector  26
5.1 Luminance Contrast Sensitivity Function  29
5.2 Relation between the CSF and a 5-level Mallat 2D wavelet decomposition  30
5.3 Typical shape of CSF for luminance and chrominance channels  32
5.4 DWT CSF mask with 11 unique weights  34
6.1 Conversion of true color input image into gray scale image  37
6.2 Discrete Wavelet Transformed image of original image  37
6.3 Conversion of true colored logo image into gray scale image  38
6.4 Logo and its reshape  38
6.5 Significant pixels in one of the sub-bands: (a) DWT sub-band (b) significant pixels in the sub-band  39
6.6 Watermarked image sub-bands  40
6.7 Watermarking: (a) watermarked DWT sub-bands (b) watermarked image (c) original image  41
6.8 Watermarked image and sub-bands  42
6.9 Watermark extraction: (a) watermarked image sub-bands (b) original image sub-bands  43
6.10 Individual extracted logo pixels and extracted logo  43


List of Tables

6.1 PSNR values for different images  45
6.2 Correlation between inserted logo and extracted logo for different images  46


List of Abbreviations

HVS    Human Visual System
CSF    Contrast Sensitivity Function
DWT    Discrete Wavelet Transform
DCT    Discrete Cosine Transform
CWT    Continuous Wavelet Transform
FIR    Finite Impulse Response
MSE    Mean Square Error
PSNR   Peak Signal to Noise Ratio
WMSE   Weighted Mean Square Error
WPSNR  Weighted Peak Signal to Noise Ratio
IDWT   Inverse Discrete Wavelet Transform


CONTENTS

Abstract  i
List of Figures  ii
List of Tables  iv
List of Abbreviations  v
1. Introduction  1
1.1 Background  1
1.2 Motivation  2
1.3 Project Goals  2
1.4 Methodology  3
1.5 Thesis Organization  3
2. Digital Watermarking  5
2.1 Introduction  5
2.2 Classification of Digital Watermarking  5
2.2.1 Visibility  5
2.2.2 Robustness  6
2.2.3 Perceptibility  6
2.2.4 Capacity  7
2.2.5 Embedding Method  7
2.3 Techniques of Digital Watermarking  8
2.3.1 Spatial Domain Watermarking  8
2.3.2 Transform Domain Watermarking  8
2.4 Application of Digital Watermarking  9
2.5 Summary  11
3. Wavelets  12
3.1 Wavelet Introduction  12
3.2 Why use Wavelets?  13
3.2.1 Wavelet Families  13
3.3 Continuous Wavelet Transform  15
3.4 Discrete Wavelet Transform  16
3.5 DWT using Filter Banks  17
3.6 Summary  20
4. Edge Detector  21
4.1 Definitions  21
4.1.1 Introduction of classical edge detectors  21
4.1.2 Noise and its influence on edge detection  21
4.1.3 Edge detector with pre-smoothing  22
4.1.4 Edge detector using wavelets  23
4.2 Review of Classical Edge Detectors  23
4.3 Roberts edge detector  24
4.4 Prewitt edge detector  25
4.5 Sobel edge detector  25
4.6 Summary  26
5. Human Visual System  28
5.1 Background  28
5.2 Mapping the CSF on a Mallat wavelet decomposition  29
5.3 Assumptions about the final viewing conditions  31
5.4 CSF Weights  33
5.5 Summary  35
6. PROPOSED WORK AND RESULTS  36
6.1 Introduction  36
6.2 Watermark Embedding  36
6.3 Watermark Extraction  42
6.4 RESULTS  44
6.4.1 MSE  44
6.4.2 PSNR  44
6.4.3 Correlation coefficient  45
6.4.4 Watermark is present or not?  46
6.4.5 WMSE & WPSNR  47
6.5 Conclusion and Future Work  47
7. Source Code  48
7.1 Matlab code for Watermark Insertion and Extraction  48
7.2 Matlab code for CSF weights  56
REFERENCES  57

Chapter 1 INTRODUCTION

1.1 Background

Development of compression algorithms for multimedia data, such as the MPEG-2/4 and JPEG standards, together with the increase in network data transmission speed, has allowed widespread use of applications that rely on digital data. In other words, digital multimedia data are rapidly spreading everywhere. On the other hand, this situation has brought about the possibility of duplicating and/or manipulating the data. For data transmitted over the Internet, the reliability and originality of the transmitted data should be verifiable; it is necessary that multimedia data be protected and secured. One way to address this problem is to embed invisible data into the original data to mark its ownership. There are many techniques for information hiding, which can be divided into categories such as covert channels, steganography, anonymity, and watermarking. Covert channels were defined in the context of multilevel secure systems. Covert channels usually exploit properties of the communication channel in an unexpected and unforeseen way in order to transfer data through the medium without detection by anyone other than the entities operating the covert channel. Steganography is about preventing the detection of encrypted data that has been protected by cryptography algorithms. Anonymity is about finding ways to hide the metadata of transmitted messages, such as the sender and the recipients. Compared to steganography algorithms, digital watermarking has an extra requirement of robustness against possible attacks. It should also be noted that watermarking is not intended to protect the content of a message, and hence it is different from cryptography. In this thesis we focus on the robustness of digital watermarking algorithms in the transform domain against common attacks.


The rapid evolution of multimedia systems and the wide distribution of digital data over the World Wide Web have raised the issue of copyright protection of digital information. The aim is to embed copyright information, called a watermark, into digital data (audio or visual) in order to protect ownership. In general, a digital watermarking technique must satisfy two requirements. First, the watermark should be transparent or perceptually invisible for image data. Second, the watermark should be resistant to attacks that may remove it or replace it with another watermark. This implies that the watermark should be robust to common signal processing operations, such as compression, filtering, enhancement, rotation, cropping, and translation.

1.2 Motivation

There are different algorithms in the spatial and transform domains for digital watermarking. The techniques in the spatial domain still have relatively low bit capacity and are not resistant enough to lossy image compression and other image processing. For instance, simple noise in the image may eliminate the watermark data. On the other hand, frequency-domain techniques can embed more watermark bits and are more robust to attack. Transforms such as the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are used for watermarking in the frequency domain. In fact, the robustness of the algorithms depends on the frequency at which the watermark data is added. We have performed our evaluation keeping in mind that the embedded watermark should be invisible, so we have kept the Peak Signal to Noise Ratio (PSNR) value of the images constant at 35 dB and compared the robustness of the different methods.

1.3 Project Goals
Not all watermarking methods are suitable for a particular application. The suitability of a method depends on the type of data, the processing or transformations applied to the data, the lossy or lossless type of compression used, the key type used, etc. An objective and quantitative measurement of the suitability of each method for a given application is therefore of great importance and usefulness.

The project work includes the following steps. We study the main DCT and DWT watermark algorithms; since transform watermark algorithms are more robust than spatial-domain ones, we implement several important transform watermark algorithms and test their robustness using different attacks.

1.4 Methodology
The method we use for watermarking and extraction is given in the block diagram of Fig. 1.1.

[Block diagram: the input image X_{i,j} is decomposed by the DWT into sub-bands X_{u,v}; an edge detector selects the significant coefficients X'_{u,v}; the logo, weighted by the CSF weights W_{θl}, is inserted into these coefficients to give Y'_{u,v}; the IDWT then yields the watermarked image Y_{i,j}.]

Fig 1.1 Block Diagram

1.5 Thesis Organization
The thesis is organized as follows. In Chapter 2, we discuss basic information about digital watermarking and the different requirements that a digital watermarking algorithm should consider. In addition, several applications of digital watermarking, a classification of different watermarking algorithms, and some hardware solutions are presented. In Chapter 3, we discuss transform-domain watermarking, focusing on the DWT in more detail: we explain the concept of the DWT, different ways to implement it, and different watermarking algorithms based on it. In Chapter 4, we discuss edge detection concepts, with particular attention to the Sobel edge detector. In Chapter 5, we discuss the Human Visual System and the CSF weights in detail. We present the proposed work and results in Chapter 6.


Chapter 2 DIGITAL WATERMARKING

2.1 Introduction

A digital watermark is a form of steganography because it hides the embedded data, often without the knowledge of the viewer or user. A digital watermark is "a digital code robustly and imperceptibly embedded in the host data that typically contains information about the origin, status, and/or destination of the data." A watermark is a secret imperceptible signal embedded into the original data in such a way that it remains present as long as the perceptible quality of the content is at an acceptable level. The owner of the original data proves his/her ownership by extracting the watermark from the watermarked content in case of multiple ownership claims. A digital watermark may comprise copyright or authentication codes, or a legend essential for signal interpretation. Common types of signals to watermark are still images, audio, and digital video.

2.2 Classification of Digital Watermarking
Digital watermarking techniques can be classified in several ways:

- Visibility
- Robustness
- Capacity
- Perceptibility
- Embedding Method

2.2.1 Visibility
Based on visibility, digital watermarking can be classified into:

Visible: The digital watermark is visible as embedded text or a logo that identifies the owner of the media.

Invisible: The digital watermark is invisible if the embedded text or logo is hidden within the source image.

2.2.2 Robustness

Based on robustness, digital watermarking can be classified into:

Fragile: A digital watermark is called fragile if it fails to be detectable after the slightest modification. Fragile watermarks are commonly used for tamper detection (integrity proof). Modifications to an original work that are clearly noticeable are commonly not referred to as watermarks.

Semi-fragile: A digital watermark is called semi-fragile if it resists benign transformations but fails detection after malignant transformations. Semi-fragile watermarks are commonly used to detect malignant transformations.

Robust: A digital watermark is called robust if it resists a designated class of transformations. Robust watermarks may be used in copy protection applications to carry copy and access control information.

2.2.3 Perceptibility
Based on perceptibility, digital watermarking can be classified into:

Imperceptible: A digital watermark is called imperceptible if the original cover signal and the marked signal are perceptually indistinguishable.


Perceptible: A digital watermark is called perceptible if its presence in the marked signal is noticeable.

2.2.4 Capacity
The length of the embedded message determines two main classes of digital watermarking schemes:

Zero-bit: The scheme detects only the presence or absence of the watermark; a 1 denotes presence and a 0 denotes absence.

N-bit: The message is an n-bit-long stream modulated into the watermark. These kinds of schemes are usually referred to as multiple-bit or non-zero-bit watermarking schemes.

2.2.5 Embedding Method
Based on the embedding method, digital watermarking can be classified into:

Spread spectrum: A digital watermarking method is referred to as spread spectrum if the marked signal is obtained by an additive modification. Spread-spectrum watermarks are known to be modestly robust, but also to have a low information capacity due to host interference.

Quantization type: A digital watermarking method is said to be of quantization type if the marked signal is obtained by quantization. Quantization watermarks suffer from low robustness, but have a high information capacity due to rejection of host interference.


Amplitude modulation: A digital watermarking method is referred to as amplitude modulation if the marked signal is embedded by an additive modification, similar to the spread spectrum method, but embedded specifically in the spatial domain.

2.3 Techniques of Digital Watermarking

2.3.1 Spatial Domain Watermarking

Spatial domain watermark algorithms insert the watermark data directly into the pixels of an image. For example, some algorithms insert pseudo-random noise into the image pixels, while other techniques modify the Least Significant Bit (LSB) of the image pixels. The invisibility of the watermark data rests on the assumption that the LSB bits are visually insignificant. There are two ways of doing an LSB modification: the LSB of each pixel can be replaced with the secret message in sequence, or the pixels may be chosen randomly according to a secret key. Here is an example of modifying the LSBs. Suppose we have the three components R, G, and B in an image, and their value for a chosen pixel is green: (R, G, B) = (0, 255, 0). If a watermark algorithm wants to hide the bit value 1 in the R component, then the new pixel value has components (R, G, B) = (1, 255, 0). As this modification is so small, the new image is indistinguishable from the original one to the human eye. Although these spatial domain techniques can easily be used on almost every image, they have the following drawbacks: they are highly sensitive to signal processing operations and can easily be damaged. For example, lossy compression could completely defeat the watermark. In other words, watermarking in the spatial domain is easy to destroy using attacks such as low-pass filtering. As a result, transform domain watermarking algorithms are used.
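As a rough MATLAB sketch of LSB replacement (an illustration of the generic technique, not this report's method; the file name and payload bits are made-up examples):

img  = imread('lena.png');              % 8-bit host image (assumed grayscale)
bits = [1 0 1 1 0 1 0 0];               % example payload bits
n = numel(bits);
img(1:n) = bitset(img(1:n), 1, bits);   % overwrite the LSB of the first n pixels
imwrite(img, 'lena_lsb.png');           % visually indistinguishable from the original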

2.3.2 Transform Domain Watermarking

Transform domain watermarking embeds the watermark data into the transformed image. Transform domain algorithms have many advantages over spatial domain algorithms. Common signal processing includes operations such as upsampling, downsampling, quantization, and requantization. Rotation, translation, and scaling are common geometric operations. A lossy operation removes some unimportant parts of the data; most of the processing in this category takes place in the transform domain and eliminates high-frequency values. Those operations corrupt watermark data that has been embedded into the original data. In addition, the techniques in the spatial domain still have relatively low bit capacity.

2.4 Applications of Digital Watermarking

Although the main motivation behind digital watermarking is copyright protection, its applications are not that restricted. There is a wide application area of digital watermarking, including broadcast monitoring, fingerprinting, authentication, and covert communication. For secure applications a watermark is used for the following purposes:
1. Copyright Protection: For the protection of intellectual property, the data owner can embed a watermark representing copyright information in his data. This watermark can prove his ownership in court when someone has infringed on his copyrights.
2. Fingerprinting: To trace the source of illegal copies, the owner can use a fingerprinting technique. In this case, the owner can embed different watermarks in the copies of the data that are supplied to different customers. Fingerprinting can be compared to embedding a serial number that is related to the customer's identity in the data. It enables the intellectual property owner to identify customers who have broken their license agreement by supplying the data to third parties.
3. Broadcast Monitoring: By embedding watermarks in commercial advertisements, an automated monitoring system can verify whether advertisements are broadcast as contracted. Not only commercials but also valuable TV products can be protected by broadcast monitoring. The same process can also be used for video and sound clips. Musicians and actors may request to ensure that they receive accurate royalties for broadcasts of their performances.
4. Data Authentication: Authentication is the detection of whether the digital content has been changed. As a solution, a fragile watermark embedded in the digital content indicates whether the data has been altered. If any tampering has occurred in the content, the same change will also occur in the watermark. It can also provide information about the part of the content that has been altered.
5. Copy Protection: The information stored in a watermark can directly control digital recording devices for copy protection purposes. In this case, the watermark represents a copy-prohibit bit, and watermark detectors in the recorder determine whether the data offered to the recorder may be stored or not.
6. Covert Communication: The watermark, a secret message, can be embedded imperceptibly in a digital image or video to communicate information from the sender to the intended receiver while maintaining a low probability of intercept by other, unintended receivers.
For non-secure applications a watermark is used for the following purposes:
1. Indexing: Indexing of video mail, where comments can be embedded in the video content; indexing of movies and news items, where markers and comments can be inserted that can be used by search engines.
2. Medical Safety: Embedding the date and the patient's name in medical images could be a useful safety measure.

3. Data Hiding:


Watermarking techniques can be used for the transmission of secret private messages. Since various governments restrict the use of encryption services, people may hide their messages in other data.

2.5 Summary
The growth of networked multimedia systems has created the need for copyright protection of various digital media, for example images, video clips, and audio clips. Copyright protection involves the authentication of ownership and the identification of illegal copies of an image. One approach used to address this problem is to add a visible or invisible structure to an image that can be used to seal or mark it. These structures are known as digital watermarks. The watermark is capable of carrying information such as authentication or authorization codes, or a legend essential for image interpretation. This capability is envisaged to find application in image tagging, copyright enforcement, counterfeit protection, and controlled access. In this thesis, we first outline the desirable characteristics of digital watermarks; previous work in digital watermarking is then summarized, and several recent approaches that address these issues are also discussed. This chapter described an introduction to watermarking, its classification, the techniques of digital watermarking, and its applications.


Chapter 3 WAVELETS

3.1 Wavelet Introduction:
A wave is a periodic oscillating disturbance that propagates through space and time, usually with transference of energy. In contrast, wavelets are localized waves that have their energy concentrated in time or space and are suited to the analysis of transient signals. A wavelet is a localized change of a sound signal in 1-D, or a localized variation of detail in an image in 2-D. Wavelets can be used for a wide variety of signal processing tasks such as compression, detecting edges, removing noise, and enhancing sound or images. Wavelets are mathematical functions that divide continuous-time signal data into different frequency components. If parts of a signal are rapidly changing, they are of high frequency; on the other hand, slowly, smoothly changing pieces are of low frequency. A portion that does not change has zero, or no, frequency.

(a)

(b)

Fig 3.1 Demonstration of (a) a wave (b) a wavelet


3.2 Why use Wavelets?
There are many substantive reasons to use wavelets, and they may differ according to the application. For example, some wavelet transforms act in a manner which divides a signal into those components which are significant in time and space and those that contribute less. This feature makes wavelets very useful in applications such as noise removal, edge detection, and data compression. In general, wavelets are beneficial when used to obtain further information from a signal that is not readily available in the raw signal. The transform of a signal is just another way of representing the signal; it does not change the information content present in the signal. Wavelets are localized waves that have their energy concentrated in time or space and are well suited to the analysis of transient signals. To cite one example, tide forecasting is performed using wavelets: the ripples and trends in the ocean waters are transient, and this is why wavelets were chosen. For watermarking, wavelets conveniently break the image up into the approximation and details. Wavelets effectively isolate manipulatable high-resolution and low-resolution bandwidths, so that the watermark can easily be embedded in the bands which are less apparent to the human eye. The wavelet transform has the ability to decompose complex information and patterns into elementary forms. Wavelets have a track record: they have been successfully used in many other image processing applications. The process is capable of complete lossless reconstruction; none of the original data is lost, and a copy of the original image and/or watermark is not needed.

3.2.1 Wavelet Families

There are a number of basis functions that can be used as the mother wavelet for the Wavelet Transform. Since the mother wavelet produces all the wavelet functions used in the transformation through translation and scaling, it determines the characteristics of the resulting Wavelet Transform. Therefore, the details of the particular application should be taken into account and the appropriate mother wavelet should be chosen in order to use the Wavelet Transform effectively.

(a)

(b)

(e)

(c)

(f)

(d)

(g)

Fig 3.2 Wavelet families (a) Haar (b) Daubechies4 (c) Coiflet1 (d) Symlet2 (e) Meyer (f) Morlet (g) Mexican Hat.

Figure 3.2 illustrates some of the commonly used wavelet functions. The Haar wavelet is one of the oldest and simplest wavelets; therefore, any discussion of wavelets starts with it. Daubechies wavelets are the most popular wavelets. They represent the foundations of wavelet signal processing and are used in numerous applications. These are also called Maxflat wavelets, as their frequency responses have maximum flatness at frequencies 0 and π. This is a very desirable property in some applications. The Haar, Daubechies, Symlet, and Coiflet wavelets are compactly supported orthogonal wavelets. These wavelets, along with Meyer wavelets, are capable of perfect reconstruction. The Meyer, Morlet, and Mexican Hat wavelets are symmetric in shape. The wavelets are chosen based on their shape and their ability to analyze the signal in a particular application.

3.3 CONTINUOUS WAVELET TRANSFORM

Unlike the continuous Fourier transform, which uses complex sinusoids as its basis functions, the wavelet transform uses wavelets as its basis functions. Each sinusoidal basis function in the Fourier transform exists over an infinite interval with constant amplitude. On the other hand, a wavelet is a fast-decaying function even though it may exist over an infinite interval. The Fourier transform of a continuous-time signal is a function of frequency only, but the wavelet transform of a continuous-time signal is a function of scale and shift. A scale as used in the wavelet transform is roughly related to the inverse of frequency: a small scale implies large frequencies and vice versa. In the Fourier transform of a periodic signal, the sinusoidal frequencies are not related to each other; in the wavelet transform, it is possible to have all the wavelet functions related to a single wavelet called the mother wavelet, from which all the wavelet functions are obtained by dilation and contraction. The formal definition of the CWT follows. Given a continuous-time signal f(t), −∞ ≤ t ≤ ∞, its CWT W(s, τ) is defined as

W(s, \tau) = \langle f(t), \psi_{s,\tau} \rangle = \int_{-\infty}^{\infty} f(t)\, \psi_{s,\tau}^{*}(t)\, dt \qquad (3.1)

In equation (3.1), s is the scale and τ is the shift, and the basis functions ψ_{s,τ}(t) are the dilations and contractions of the mother wavelet, given by

\psi_{s,\tau}(t) = \frac{1}{\sqrt{s}}\, \psi\!\left(\frac{t - \tau}{s}\right) \qquad (3.2)

3.4 DISCRETE WAVELET TRANSFORM
The Wavelet Series is just a sampled version of the CWT, and its computation may consume a significant amount of time and resources, depending on the resolution required. The Discrete Wavelet Transform (DWT), which is based on sub-band coding, is found to yield a fast computation of the Wavelet Transform. It is easy to implement and reduces the computation time and resources required. If the signal, scaling functions, and wavelets are discrete in time, then the wavelet series of the discrete-time signal is called the DWT. The DWT of a sequence consists of two series expansions, one corresponding to the approximation and the other to the details of the sequence. The formal definition of the DWT of an N-point sequence x[n], 0 ≤ n ≤ N − 1, is given by

DWT\{x[n]\} = \{\, W_\phi(j_0, k),\; W_\psi(j, k) \,\} \qquad (3.3)

where

W_\phi(j_0, k) = \frac{1}{\sqrt{N}} \sum_{n} x[n]\, \phi_{j_0,k}[n], \qquad W_\psi(j, k) = \frac{1}{\sqrt{N}} \sum_{n} x[n]\, \psi_{j,k}[n], \quad j \ge j_0 \qquad (3.4)

The sequence x[n], 0 ≤ n ≤ N − 1, can be recovered from the DWT coefficients W_\phi and W_\psi as

x[n] = \frac{1}{\sqrt{N}} \sum_{k} W_\phi(j_0, k)\, \phi_{j_0,k}[n] + \frac{1}{\sqrt{N}} \sum_{j = j_0}^{\infty} \sum_{k} W_\psi(j, k)\, \psi_{j,k}[n] \qquad (3.5)

The scale parameter in the second summation of equation (3.5) has an infinite number of terms, but in practice its upper limit is usually fixed at some value, say J. The starting scale value j_0 is usually set to zero and corresponds to the original signal. Thus, the DWT coefficients for x[n], 0 ≤ n ≤ N − 1, are computed for j = 0, 1, . . . , J − 1 and k = 0, 1, . . . , 2^j − 1. Also, N is typically a power of 2, of the form N = 2^J.
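As a quick numeric illustration of this analysis/synthesis pair, the following MATLAB sketch (Wavelet Toolbox assumed) computes a one-level Haar DWT of an 8-point sequence and reconstructs it exactly:

x = [4 6 10 12 8 6 5 5];      % N = 8 = 2^3
[a, d] = dwt(x, 'haar');      % approximation and detail coefficients
xr = idwt(a, d, 'haar');      % inverse transform
max(abs(x - xr))              % ~0: perfect reconstruction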


3.5 DWT Using Filter Banks
Filters are one of the most widely used signal processing functions. Wavelets can be realized by iteration of filters with rescaling. The resolution of the signal, which is a measure of the amount of detail information in the signal, is determined by the filtering operations, and the scale is determined by upsampling and downsampling (subsampling) operations. The DWT is computed by successive lowpass and highpass filtering of the discrete time-domain signal, as shown in figure 3.3. This is called the Mallat algorithm or Mallat-tree decomposition. Its significance is in the manner it connects continuous-time multiresolution to discrete-time filters. In the figure, the signal is denoted by the sequence x[n], where n is an integer. The low pass filter is denoted by G0, while the high pass filter is denoted by H0. At each level, the high pass filter produces the detail information d[n], while the low pass filter associated with the scaling function produces the coarse approximations a[n].

Fig 3.3 Decomposition of an input image using Downsampling

At each decomposition level, the half band filters produce signals spanning only half the frequency band. This doubles the frequency resolution, as the uncertainty in frequency is reduced by half. In accordance with Nyquist's rule, if the original signal has a highest frequency of ω, which requires a sampling frequency of 2ω radians, then it now has a highest frequency of ω/2 radians. It can now be sampled at a frequency of ω radians, thus discarding half the samples with no loss of information. This decimation by 2 halves the time resolution, as the entire signal is now represented by only half the number of samples. Thus, while the half band low pass filtering removes half of the frequencies and thus halves the resolution, the decimation by 2 doubles the scale. With this approach, the time resolution becomes arbitrarily good at high frequencies, while the frequency resolution becomes arbitrarily good at low frequencies; the signal is thus resolved as shown. The filtering and decimation process is continued until the desired level is reached. The maximum number of levels depends on the length of the signal. The DWT of the original signal is then obtained by concatenating all the coefficients, a[n] and d[n], starting from the last level of decomposition.

Fig 3.4 Reconstruction of an image using Upsampling

Fig 3.4 shows the reconstruction of the original signal from the wavelet coefficients. Basically, the reconstruction is the reverse process of decomposition. The approximation and detail coefficients at every level are upsampled by two, passed through the low pass and high pass synthesis filters, and then added. This process is continued through the same number of levels as in the decomposition process to obtain the original signal. The Mallat algorithm works equally well if the analysis filters, G0 and H0, are exchanged with the synthesis filters, G1 and H1. Figure 3.6 illustrates the first decomposition level (d = 1). In this level the original image is decomposed into four sub-bands that carry the frequency information in both the horizontal and vertical directions. In order to form multiple decomposition levels, the algorithm is applied recursively to the LL sub-band. Figure 3.7 illustrates the second (d = 2) and third (d = 3) decomposition levels as well as the layout of the different bands. There are different approaches to implement the 2D DWT, such as the traditional convolution-based and lifting-scheme methods. The convolutional methods apply filtering by multiplying the filter coefficients with the input samples and accumulating the results. Their implementation is similar to a Finite Impulse Response (FIR) implementation; this kind of implementation needs a large number of computations.
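In MATLAB (Wavelet Toolbox assumed), the recursive 2-D decomposition described above can be sketched as follows; cameraman.tif is simply a convenient 256x256 test image:

X = im2double(imread('cameraman.tif'));
[ca1, ch1, cv1, cd1] = dwt2(X, 'haar');    % level 1: LL, HL, LH, HH sub-bands
[ca2, ch2, cv2, cd2] = dwt2(ca1, 'haar');  % level 2, applied to the LL band only
ca1r = idwt2(ca2, ch2, cv2, cd2, 'haar');  % synthesis: rebuild the level-1 LL band
Xr = idwt2(ca1r, ch1, cv1, cd1, 'haar');   % rebuild the full image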

Fig 3.5 DWT image decomposition and reconstruction

Input image

Output image after first-level decomposition

Fig 3.6: Different sub-bands after first decomposition level.


Fig 3.7: Different sub-bands after second and third decomposition level

(a) Original image layout

(b) DWT image layout

Fig 3.8 Layouts of Original image and DWT image.

3.6 Summary
The DWT is based on sub-band coding and yields a fast computation of wavelet coefficients. It is easy to implement and reduces the computation time and resources required. In the DWT, the most prominent information in the signal appears in high amplitudes, and less prominent information appears in very low amplitudes. This is why we prefer the DWT.


CHAPTER-4 EDGE DETECTOR

4.1 What is an edge and what is an edge detector?

An edge in an image is a contour across which the brightness of the image changes abruptly. In image processing, an edge is often interpreted as one class of singularities. In a function, singularities can be characterized easily as discontinuities where the gradient approaches infinity. However, image data is discrete, so edges in an image are often defined as the local maxima of the gradient. This is the definition we will use here. Edge detection is an important task in image processing. It is a main tool in pattern recognition, image segmentation, and scene analysis. An edge detector is basically a high-pass filter that can be applied to extract the edge points in an image. This topic has attracted many researchers and many achievements have been made.

4.1.1 Introduction of the classical edge detectors

Many classical edge detectors have been developed over time. They are based on the principle of matching local image segments with specific edge patterns. The edge detection is realized by convolution with a set of directional derivative masks. The popular edge detection operators are the Roberts, Sobel, Prewitt, Frei-Chen, and Laplacian operators. They are all defined on a 3 by 3 pattern grid, so they are efficient and easy to apply. In certain situations where the edges are highly directional, some detectors work especially well because their patterns fit the edges better.

4.1.2 Noise and its influence on edge detection


However, classical edge detectors usually fail to handle images with strong noise, as shown in Fig. 4.1. Noise is an unpredictable perturbation added to the original image; it is usually introduced by the transmission or compression of the image.

(a) Lena Image

(b) Edges using canny detector

(c) Lena image with noise

(d) Edges from image with noise

Fig. 4.1: Impact of noise on edge detection

There are various kinds of noise, but the two most widely studied kinds are white noise and "salt and pepper" noise. Fig. 4.1 shows the dramatic difference between the results of edge detection from two similar images, the latter one affected by some white noise.

4.1.3 Edge detector with pre-smoothing

To reduce the influence of noise, two techniques were developed from 1979 to 1984. Marr et al. suggested filtering the images with a Gaussian before edge detection; Hueckel and Haralick proposed to approximate the image with a smooth function. The weakness of the above approaches is that the optimal result may not be obtained by using a fixed operator. Canny [4] developed a computational approach to edge detection in 1986. He derived that an optimal detector can be approximated by the first derivative of a Gaussian.

4.1.4 Edge detector using wavelets

Edges in images can be mathematically defined as local singularities. Until recently, the Fourier transform was the main mathematical tool for analyzing singularities. However, the Fourier transform is global and not well adapted to local singularities; it is hard to find the location and spatial distribution of singularities with Fourier transforms. Wavelet analysis is a local analysis; it is especially suitable for time-frequency analysis, which is essential for singularity detection. This was a major motivation for the study of the wavelet transform in mathematics and in applied domains. The wavelet transform characterizes the local regularity of signals by decomposing signals into elementary building blocks that are well localized both in space and frequency. This not only explains the underlying mechanism of classical edge detectors, but also indicates a way of constructing optimal edge detectors under specific working conditions.

4.2 Review of Classical Edge Detectors
Classical edge detectors use a pre-defined group of edge patterns to match each image segment of a fixed size. 2-D discrete convolutions are used here to find the correlations between the pre-defined edge patterns and the sampled image segment:

(f * m)(x, y) = \sum_{i} \sum_{j} f(i, j)\, m(x - i,\, y - j) \qquad (4.1)

where f is the image and m is the edge pattern defined by

m = | m(-1,-1)  m(-1,0)  m(-1,1) |
    | m(0,-1)   m(0,0)   m(0,1)  |
    | m(1,-1)   m(1,0)   m(1,1)  |

m(i, j) = 0 if (i, j) is not in the defined grid. These patterns are represented as filters, which are vectors (1-D) or matrices (2-D). For fast performance, the dimension of these filters is usually 1×3 (1-D) or 3×3 (2-D). From the point of view of functions, filters are discrete operators of directional derivatives. Instead of finding the local maxima of the gradient, we set a threshold and consider those points with gradient above the threshold as edge points. Given the source image f(x, y), the edge image E(x, y) is given by

E(x, y) = \sqrt{(s * f)^2 + (t * f)^2} \qquad (4.2)

where s and t are two filters of different directions.

4.3 Roberts edge detector

s = |  0  0  0 |        t = |  0  0  0 |
    |  0  1  0 |            |  0  0  1 |
    |  0  0 -1 |            |  0 -1  0 |

The edge patterns are shown in Fig. 4.2.

(a) (b)

Fig. 4.2 Edge patterns for Roberts edge detector: (a) s; (b) t

These filters have the shortest support, so the position of the edges is more accurate. On the other hand, the short support of the filters makes them very vulnerable to noise. The edge pattern of this edge detector makes it especially sensitive to edges with a slope around 45°. Some computer vision programs use the Roberts edge detector to recognize the edges of roads.


4.4 Prewitt edge detector

where s and t are given by

s = | -1 -1 -1 |        t = | -1  0  1 |
    |  0  0  0 |            | -1  0  1 |
    |  1  1  1 |            | -1  0  1 |

The edge patterns are shown in Fig. 4.3.

(a) (b)

Fig. 4.3 Edge patterns for Prewitt edge detectors: (a) s; (b) t

These filters have longer support. They differentiate in one direction and average in the other direction, so the edge detector is less vulnerable to noise. However, the position of the edges might be altered due to the averaging operation.

4.5 Sobel edge detector

where s and t are given by

s = | -1 -2 -1 |        t = | -1  0  1 |
    |  0  0  0 |            | -2  0  2 |
    |  1  2  1 |            | -1  0  1 |

The edge patterns are shown in Fig. 4.4.

(a) (b)

Fig. 4.4 Edge patterns for Sobel edge detectors: (a) s; (b) t

These filters are similar to the Prewitt edge detector, but the averaging operator is more like a Gaussian, which makes it better at removing some white noise.

(a) Lena

(b)Lena image with sobel detector

Fig 4.5 Lena image with sobel edge detector

4.6 Summary
In computer vision and image processing, edge detection concerns the localization of significant variations of the grey level image and the identification of the physical phenomena that originated them. This information is very useful for applications in 3D reconstruction, motion, recognition, image enhancement and restoration, image registration, image compression, and so on. Usually, edge detection requires smoothing and differentiation of the image. Differentiation is an ill-conditioned problem, and smoothing results in a loss of information. It is difficult to design a general edge detection algorithm which performs well in many contexts and captures the requirements of subsequent processing stages. Consequently, over the history of digital image processing a variety of edge detectors have been devised which differ in their mathematical and algorithmic properties. This chapter gave an account of the current state of our understanding of edge detection and an overview of the classical detectors.


CHAPTER-5 HUMAN VISUAL SYSTEM

5.1 Background

Human visual system (HVS) research offers mathematical models of how humans see the world around them. For example, models have been developed to characterize humans’ sensitivity to brightness and color. The contrast sensitivity function (CSF) describes humans’ sensitivity to spatial frequencies. A model of the CSF for luminance (or grayscale) images, originally proposed by Mannos and Sakrison is given by:

CSF(f) = 2.6\,(0.192 + 0.114 f)\, e^{-(0.114 f)^{1.1}} \qquad (5.1)

The frequency f is usually measured in cycles per optical degree (cpd), where the spatial frequency is f = \sqrt{f_x^2 + f_y^2}, and f_x and f_y are the spatial frequencies in the horizontal and vertical directions respectively. We normalize the spatial frequency with the relation

f (cycles/degree) = f_n (cycles/pixel) \times f_s (pixels/degree)

where f_n is the normalized spatial frequency with units of cycles/pixel. Since our observers sit at an approximate viewing distance of 2 feet, f_s is set to 64 pixels/degree. This value of f_s is equivalent to a viewing distance of four times the image height (the image height is approximately 6 inches). Since f has a range between 0 and f_s/2, f_n has a range between 0 and 0.5.


Figure 5.1: Luminance Contrast Sensitivity Function.

Figure 5.1 depicts the CSF curve; it characterizes the luminance sensitivity of the HVS as a function of normalized spatial frequency. The CSF is a band-pass filter: the HVS is most sensitive to normalized spatial frequencies between 0.03 and 0.23, and less sensitive to very low and very high frequencies.

5.2 Mapping the CSF on a Mallat wavelet decomposition
Let us assume for the moment that the image compression shall be optimized for an image with a given resolution r, measured in pixels (dots) per inch, and a viewing distance v, measured in meters. The sampling frequency f_s in pixels per degree is then given by Eq. (5.2):

f_s = \frac{2\, v\, \tan(0.5°)\, r}{0.0254} \qquad (5.2)

If the signal is critically down-sampled at the Nyquist rate, 0.5 cycles per pixel are obtained. That means the maximum frequency represented in the signal, measured in cycles per degree, is f_max = 0.5 f_s. The decompositions used for compression purposes can be described by a filter bank of separable 2D filters. The filter that is used by default in the JPEG2000 standard is the Daubechies 9/7 biorthogonal filter. To determine the spatial frequency range that is covered by each specific sub-band, it is nevertheless reasonable to simplify the filter transfer function to an ideal high- and low-pass. The successive application of these filters results in a sub-band-specific band-pass range of horizontal and vertical spatial frequencies. As an example, let us look in detail at the sub-band of level l = 2 and orientation Ψ = 2 as marked in Fig. 5.2.

Fig. 5.2 Relation between the CSF and a 5-level Mallat 2D wavelet decomposition.


It contains mostly horizontal details. In other words, it indicates where in the image the luminance varies rapidly along the vertical orientation (this is the case for horizontal edges). The horizontal spatial frequencies of this sub-band range from 0 to 0.25 f_max, the vertical frequencies from 0.25 to 0.5 f_max. Fig 5.2 also shows two CSFs along the vertical and horizontal frequency axes, using a linear scale on the x- and y-axes. The part of the CSF that corresponds to the frequency range covered by the sub-band of level 2 is shaded. Hence, the weighting that must be used for the wavelet coefficients in this sub-band is described by portions of two separate CSF functions. It could be argued that the real CSF is a two-dimensional function. However, the DWT decomposition used has, due to its separable structure, already reduced the information about the signal orientation inside a sub-band. Hence, the application of a 2D CSF does not offer any advantage over a CSF that is separated into its horizontal and vertical components. Let the level-dependent frequency intervals f_{ΘL} and f_{ΘH} be denoted by

f_{ΘL}(l) = 0 \ldots f_A(l) \quad \text{and} \quad f_{ΘH}(l) = f_A(l) \ldots f_B(l) \qquad (5.3)

where f_A and f_B are the limits of these intervals:

f_A(l) = 2^{-l} f_{max} \quad \text{and} \quad f_B(l) = 2 f_A(l) \qquad (5.4)

and l is the sub-band level (l = 1 corresponds to the highest frequencies). Then the specific frequency range for each orientation (Ψ ∈ {0, 1, 2, 3}) is given by

f_Ψ = (f_Ψ^{h}, f_Ψ^{v}) ∈ \{ (f_{ΘL}, f_{ΘL}),\, (f_{ΘL}, f_{ΘH}),\, (f_{ΘH}, f_{ΘL}),\, (f_{ΘH}, f_{ΘH}) \} \qquad (5.5)
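A small MATLAB sketch of Eq. (5.4), assuming f_s = 64 pixels/degree so that f_max = 32 cycles/degree:

fmax = 32;              % cycles/degree, from fmax = 0.5*fs with fs = 64
l = 1:5;                % decomposition levels
fA = 2.^(-l) * fmax;    % lower band edge per level: 16 8 4 2 1
fB = 2 * fA;            % upper band edge per level: 32 16 8 4 2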

5.3 Assumptions about the final viewing conditions

So far, the viewing conditions (r and v) were assumed to be fixed. This may not be realistic, as an observer can look at the images from any distance. Nevertheless, fixing r and v is necessary to apply a frequency weighting. Therefore it is shown now that, with a slight modification of the CSF shape and the assumption of "worst case viewing conditions", a CSF weighting is obtained that works properly for all different viewing distances and typical display media (screen/printer) resolutions.


Fig. 5.3 Typical shape of CSF for luminance and chrominance channels.

The bars on the top of the figure indicate to which level in the decomposition the band-pass frequency ranges are mapped. Fig 5.3 shows the corresponding levels.

The luminance CSF, as shown in Fig 5.3, has a maximum for spatial frequencies around 4 cycles per degree and a so-called low-frequency dip for frequencies below this. It is important which sub-band finally contains the maximum and the dip. As illustrated by the horizontal bars in Fig 5.3, for a viewing distance of d1 (e.g. d1 = 30 cm) the maximum is in level 3, while it is in level 5 for a larger viewing distance d2 (e.g. d2 = 90 cm). A weighting computed for d1 would coarsely quantize the coefficients in level 5; but viewed from a larger distance these frequencies shift to the maximum sensitivity, and so artifacts become visible. To avoid this, the luminance CSF has to be flattened for low frequencies, as shown by the dash-dotted line in Fig 5.3. This change of the CSF shape causes an increased bit-rate for the lower levels. However, the affected sub-bands are so small that this increase is completely negligible (see Fig 5.3). More serious is the fact that a significant amount of bits would be wasted on the high frequencies if the weighting were computed for d1 and the image were finally viewed at d2 or further. Therefore, it is crucial to estimate the closest expected viewing distance for an appropriate weighting. However, for visually lossy compression, not only r and v are relevant, but also the targeted final image quality. If the final image quality is desired to be close to visually lossless, a more conservative weighting (small d) that preserves the high frequencies well is recommended. If the quality is so bad that parts of the image are blurred and strong artifacts are present, a more aggressive weighting (large d) should be used. The key question is now which viewing distance should be assumed. Actually the answer was derived from subjective quality evaluations, where several observers looked at high- and low-quality images. For high quality they approached the closest possible distance that the human eye can accommodate (about 15 cm). In contrast, poor quality images they viewed from distances greater than 50 cm. Now, what shall be assumed for the final media resolution? Good monitors have about 82 dpi, and continuous-tone printers reach around 300 dpi. In fact, this 300 dpi quality of a continuous-tone printer is still much better than what is reached by a 1200 dpi laser printer that uses dithering to model the gray shades. From Eq. (5.2) it is clear that only the product of r and v is relevant. So an optimization for 300 dpi and 15 cm is equivalent to one for 82 dpi and 55 cm, for example. Actually this is the distance to which observers approach if they inspect high quality images in detail on a large monitor. Finally, this means that an ideal weighting strategy for printing and on-screen viewing can be determined. The only information that is needed is an estimate of the targeted image quality.

5.4 CSF Weights

The CSF weight of each sub-band at each level is calculated; the resulting mask is shown in Fig. 5.4.


Fig 5.4: DWT CSF mask with 11 unique weights.


5.5 Summary
In this chapter we computed the contrast sensitivity function weights, which are used for watermarking; they help preserve image quality. Every band spans many CSF values, but the maximum value is taken. By decomposing the contrast sensitivity curve over the 5-level Mallat 2D wavelet decomposition we obtain the contrast sensitivity function weights. When the logo is inserted into the image, it may or may not be visible; with the CSF weights, the watermarked image remains clearly visible.


CHAPTER-6 PROPOSED WORK AND RESULTS

6.1 Introduction
In this chapter we explain the watermark embedding and extraction algorithms. The quality of the reconstructed image is calculated using the PSNR and WPSNR. The correlation between the inserted watermark and the extracted watermark is calculated using the correlation coefficient.

6.2 Watermark Embedding

Algorithm for Embedding Watermark in Wavelet Transform

Step1: Consider the original image; convert the true color image into a gray scale image.
Step2: Apply the Discrete Wavelet Transform up to 5 levels using the Haar wavelet.
Step3: Consider the logo image; convert the true color image into a gray scale image.
Step4: Convert the logo image into an individual array of pixels.
Step5: Calculate the number of significant pixels in each sub-band.
Step6: Insert the watermark pixels in the place of the significant pixels.
Step7: Apply the IDWT to the watermarked sub-bands.


Step1: Consider the original image, convert true color image into gray scale image.

(a) Colored Lena image

(b) Gray scale Lena image

Fig 6.1 Conversion of true color input image into gray scale image

Step2: Apply the Discrete Wavelet Transform up to 5 levels using the Haar wavelet.


(a) Original Image

(b) 5 level DWT image

Fig 6.2 Discrete Wavelet Transformed image of Original image

Step3: Consider logo image, convert true color image into gray scale image.

(a) Logo image

(b) Gray scale Logo image

Fig 6.3 Conversion of true colored logo image into gray scale image

Step4: The 100*100 logo is converted into an individual array of pixels of size 1*10000 using the reshape command.


(a) Logo of size 100*100

(b) Reshaped logo of size 1*10000

Fig 6.4 Logo and its reshape

Step5: Calculate the number of significant pixels in each sub band. Now consider DWT sub bands and find out the number of significant pixels in each sub band.

(a)

(b)

Fig 6.5 Significant pixels in one of the sub-bands: (a) DWT sub-band (b) significant pixels in the sub-band

Step6: Insert the watermark pixels in the place of the significant pixels. The logo image is divided into individual pixels and inserted at the significant pixel locations of each sub-band. The watermark expression is given by

Y_{u,v} = X_{u,v} + \alpha\, W_{l\theta}\, B_{x,y} \qquad (6.1)

where Y_{u,v} are the watermarked image sub-bands, X_{u,v} the original image sub-bands, W_{l\theta} the weight of each sub-band, B_{x,y} the logo pixels, and \alpha the scaling factor.
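A MATLAB sketch of Eq. (6.1) for a single sub-band; Xh (the sub-band), edgeMask (its edge map), w (its CSF weight), alpha and b (the reshaped logo pixels) are illustrative names, and the full listing appears in Chapter 7:

[r, c] = find(edgeMask > 0.5);   % locations of the significant (edge) pixels
Yh = Xh;                         % watermarked copy of the sub-band
for k = 1:numel(r)
    Yh(r(k), c(k)) = Xh(r(k), c(k)) + alpha * w * b(k);   % Eq. (6.1)
end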

Fig 6.6 Watermarked image sub bands.

Step7: Apply IDWT to the watermarked sub bands



(a)

(b)

(c)

Fig 6.7 watermarking (a) Watermarked DWT sub bands (b) Watermarked image (c) Original image


6.3 Watermark Extraction

Algorithm for Extracting Watermark in Wavelet Transform

Step1: Apply the DWT to the watermarked image.
Step2: Subtract the original image sub-bands from the watermarked image sub-bands (a sketch of this inverse step follows).
Step3: Convert the individual logo pixels into a single image of size 100*100.
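The per-sub-band inverse of Eq. (6.1), using the same illustrative names as in the embedding sketch:

bHat = zeros(1, numel(r));
for k = 1:numel(r)
    bHat(k) = (Yh(r(k), c(k)) - Xh(r(k), c(k))) / (alpha * w);
end
% once all sub-bands are processed, the 1x10000 vector is reshaped:
logoHat = reshape(bHat, 100, 100);   % valid when bHat holds all 10000 pixels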

Step1: Apply DWT to the Watermarked Image.

(a) Watermarked image

(b) Watermarked image sub bands.

Fig 6.8 Watermarked image and sub-bands

Step2: Now subtract the original image sub-bands from the watermarked image sub-bands. It is the reverse process of watermarking.

Fig 6.9 Watermark extraction: (a) watermarked image sub-bands (b) original image sub-bands

Step3: Convert the individual logo pixels into a single image of size 100*100.

(a) Individual Extracted logo pixels

(b) Extracted logo

Fig 6.10 Individual extracted logo pixels and extracted logo

6.4 RESULTS

6.4.1 MSE
The MSE measures the average of the squares of the "errors." The error is the amount by which the value implied by the estimator differs from the quantity to be estimated. The difference occurs because of randomness or because the estimator does not account for information that could produce a more accurate estimate. The equation for the MSE is given by

MSE = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( x(i,j) - y(i,j) \right)^2 \qquad (6.2)

Where M and N are the size of the image, x is the original image and y is the watermarked image.

6.4.2 PSNR
The Peak Signal-to-Noise Ratio is the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. Because many signals have a very wide dynamic range, the PSNR is usually expressed on the logarithmic decibel scale. The PSNR is most commonly used to measure the quality of reconstruction of lossy compression codecs (e.g., for image compression). The signal in this case is the original data, and the noise is the error introduced by compression. When comparing compression codecs, the PSNR is an approximation to human perception of reconstruction quality. Although a higher PSNR generally indicates that the reconstruction is of higher quality, in some cases the reverse may be true. One has to be extremely careful with the range of validity of this metric; it is only conclusively valid when used to compare results from the same codec (or codec type) and the same content. The equation for the PSNR is given by

PSNR = 10 \log_{10}\!\left( \frac{MAX_I^2}{MSE} \right) \qquad (6.3)
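A direct MATLAB transcription of Eqs. (6.2) and (6.3), where I is the original and J the watermarked 8-bit image:

e = double(I) - double(J);            % per-pixel error
mseVal  = mean(e(:).^2);              % Eq. (6.2)
psnrVal = 10 * log10(255^2 / mseVal); % Eq. (6.3), MAX_I = 255 for 8-bit data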

Here MAX_I is the maximum possible pixel value of the image; when the pixels are represented using 8 bits per sample, this is 255. Typical values for the PSNR in lossy image and video compression are between 30 and 50 dB, where higher is better. Acceptable values for wireless transmission quality loss are considered to be about 20 dB to 25 dB. The PSNR was calculated for different images with the same watermark:

Image name     PSNR (dB)
Lena           25.8
Living room    20.7
Cameraman      24.6
Peppers        22.5

Table 6.1 PSNR values for different images

6.4.3 Correlation coefficient
The correlation coefficient determines the correlation between two images of the same size. The correlation between the extracted logo and the inserted watermark is calculated as

\rho(X, Y) = \frac{E[(X - \mu_X)(Y - \mu_Y)]}{\sigma_X\, \sigma_Y} \qquad (6.4)

where X and Y are the images and E is the expected value; \mu_X and \mu_Y are the means of X and Y, and \sigma_X and \sigma_Y are the standard deviations of X and Y. The correlation coefficient was calculated for different images with the same watermark. The correlation between the inserted watermark and the extracted watermark is given in tabular form:

Image name     Correlation
Lena           0.988
Living room    0.994
Cameraman      0.883
Peppers        0.553

Table 6.2 Correlation between inserted logo and extracted logo for different images
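In MATLAB (Image Processing Toolbox), corr2 computes exactly the normalized correlation of Eq. (6.4); logoIn and logoOut are illustrative names for the inserted and extracted logos:

rho = corr2(double(logoIn), double(logoOut));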

6.4.4 Watermark Present or Not?

The watermark detection is accomplished without referring to the original image, by considering the correlation between the watermarked coefficients and the watermarking sequence:

\rho = \frac{1}{MN} \sum_{u} \sum_{v} Y_{u,v} \cdot N_{u,v} \qquad (6.5)

where Y_{u,v} represents the watermarked perceptually significant coefficients and N_{u,v} is the watermark sequence. The correlation value is compared with a threshold value \rho_{th}:

\rho > \rho_{th} : the watermark is present \qquad (6.6)
\rho < \rho_{th} : the watermark is absent
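A MATLAB sketch of the detection test of Eqs. (6.5) and (6.6); Yw, wseq, and rhoTh are illustrative names for the watermarked significant coefficients, the watermark sequence, and the threshold:

rho = mean(Yw(:) .* wseq(:));   % Eq. (6.5): (1/MN) times the sum of products
present = rho > rhoTh;          % Eq. (6.6): true if the watermark is detected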

7. Source Code

7.1 Matlab code for Watermark Insertion and Extraction

% Excerpt from the insertion/extraction routine. The opening of the listing
% (which reads the image I and the logo, computes the 5-level Haar DWT
% sub-bands ca5..cd1, the CSF weights y{...}, the reshaped logo pixels b,
% the scaling factor alpha and the thresholded edge maps bw1..bw16)
% precedes this point.
% Locate the significant (edge) pixels of each sub-band:
[r1,c1]=find(bw1>0.5);
[r2,c2]=find(bw2>0.5);
[r3,c3]=find(bw3>0.5);
[r4,c4]=find(bw4>0.5);
[r5,c5]=find(bw5>0.5);
[r6,c6]=find(bw6>0.5);
[r7,c7]=find(bw7>0.5);
[r8,c8]=find(bw8>0.5);
[r9,c9]=find(bw9>0.5);
[r10,c10]=find(bw10>0.5);
[r11,c11]=find(bw11>0.5);
[r12,c12]=find(bw12>0.5);
[r13,c13]=find(bw13>0.5);
[r14,c14]=find(bw14>0.5);
[r15,c15]=find(bw15>0.5);
[r16,c16]=find(bw16>0.5);
% Embed the logo pixels b at the significant locations of each detail
% sub-band, scaled by alpha and the CSF weight y{k} (Eq. (6.1)):
ch1n=ch1;
for i=1:size(r1)
    ch1n(r1(i),c1(i))=ch1(r1(i),c1(i))+alpha.*y{1}.*b(i);
end
cv1n=cv1;
for i=1:size(r2)
    cv1n(r2(i),c2(i))=cv1(r2(i),c2(i))+alpha.*y{2}.*b(i+2700);
end
cd1n=cd1;
for i=1:size(r3)
    cd1n(r3(i),c3(i))=cd1(r3(i),c3(i))+alpha.*y{3}.*b(i+5730);
end
ch2n=ch2;
for i=1:size(r4)
    ch2n(r4(i),c4(i))=ch2(r4(i),c4(i))+alpha.*y{4}.*b(i+8163);
end
cv2n=cv2;
for i=1:size(r5)
    cv2n(r5(i),c5(i))=cv2(r5(i),c5(i))+alpha.*y{5}.*b(i+8911);
end
cd2n=cd2;
for i=1:229
    cd2n(r6(i),c6(i))=cd2(r6(i),c6(i))+alpha.*y{6}.*b(i+9771);
end
% Assemble the watermarked DWT layout for display:
knew(1:16,1:16)=ca5;
knew(1:16,17:32)=ch5;
knew(17:32,1:16)=cv5;
knew(17:32,17:32)=cd5;
knew(1:32,33:64)=ch4;
knew(33:64,1:32)=cv4;
knew(33:64,33:64)=cd4;
knew(1:64,65:128)=ch3;
knew(65:128,1:64)=cv3;
knew(65:128,65:128)=cd3;
knew(1:128,129:256)=ch2n;
knew(129:256,1:128)=cv2n;
knew(129:256,129:256)=cd2n;
knew(1:256,257:512)=ch1n;
knew(257:512,1:256)=cv1n;
knew(257:512,257:512)=cd1n;
% Inverse DWT back to the watermarked image J:
ca4w=idwt2(ca5,ch5,cv5,cd5,'haar');
ca3w=idwt2(ca4w,ch4,cv4,cd4,'haar');
ca2w=idwt2(ca3w,ch3,cv3,cd3,'haar');
ca1w=idwt2(ca2w,ch2n,cv2n,cd2n,'haar');
J=idwt2(ca1w,ch1n,cv1n,cd1n,'haar');
axes(handles.axes4)
imshow(J)
title('Watermarked image');
% Extraction: decompose the watermarked image again...
[da1 dh1 dv1 dd1]=dwt2(J,'haar');
[da2 dh2 dv2 dd2]=dwt2(da1,'haar');
[da3 dh3 dv3 dd3]=dwt2(da2,'haar');
[da4 dh4 dv4 dd4]=dwt2(da3,'haar');
[da5 dh5 dv5 dd5]=dwt2(da4,'haar');
% ...and recover each logo pixel by subtracting the original coefficient
% and dividing by alpha and the CSF weight (the inverse of Eq. (6.1)):
for j=1:size(r1)
    v1=dh1(r1(j),c1(j))-ch1(r1(j),c1(j));
    f(1,j)=v1/(alpha*y{1});
end
for j=1:size(r2)
    v2=dv1(r2(j),c2(j))-cv1(r2(j),c2(j));
    f(1,2700+j)=v2/(alpha*y{2});
end
for j=1:size(r3)
    v3=dd1(r3(j),c3(j))-cd1(r3(j),c3(j));
    f(1,5730+j)=v3/(alpha*y{3});
end
for j=1:size(r4)
    v4=dh2(r4(j),c4(j))-ch2(r4(j),c4(j));
    f(1,8163+j)=v4/(alpha*y{4});
end
for j=1:size(r5)
    v5=dv2(r5(j),c5(j))-cv2(r5(j),c5(j));
    f(1,8911+j)=v5/(alpha*y{5});
end
for j=1:229
    v6=dd2(r6(j),c6(j))-cd2(r6(j),c6(j));
    f(1,9771+j)=v6/(alpha*y{6});
end
g=reshape(f,100,100);
axes(handles.axes3)
imshow(g)
title('Extracted logo');
% Quality metrics:
I=im2uint8(I);
J=im2uint8(J);
I=I/255;
J=J/255;
P=psnr(J,I);
C=corr2(ax,g);
R=WPSNR(J,I);
% Detection statistic (Eqs. (6.5)-(6.6)):
sum=0;
for i=1:100
    for j=1:100
        sum=sum+knew(i,j).*ax(i,j);
    end
end
a=512*512;
d=sum/10000;
sum1=0;
for i=1:512
    for j=1:512
        sum1=sum1+knew(i,j).*knew(i,j);   % fixed: accumulator was written as 'sum'
    end
end
d1=sum1/a^2;
c=3.97*sqrt(2*d1^2);
set(handles.text3,'String',P)
set(handles.text4,'String',R)
set(handles.text5,'String',C)

7.2 Matlab code for CSF weights

% Sampling frequency from screen resolution gamma (dpi) and viewing
% distance v; cf. Eq. (5.2). Note MATLAB's tan works in radians, so
% tand(0.5) would match the 0.5 degrees of Eq. (5.2) exactly.
gamma=82;
v=55;
j=gamma/0.0254;
fs=2*v*tan(0.5)*j;
fmax=fs/2;
fmax=fmax/1000;
% Luminance CSF of Eq. (5.1):
f=0:50;
a=exp(-0.114*f);
b=a.^(1.1);
c=2.6*(0.192+0.114*f).*b;
% k=5*f/fmax;
plot(f,c);
% Per-band weights: combine the horizontal and vertical CSF samples that
% fall within each sub-band's frequency range.
h1(1:24)=c(27:50);
v1(1:24)=c(27:50);
for i=1:24
    csfhh3(i)=sqrt(h1(i)^2+v1(i)^2);
end
h1(1:24)=c(27:50);
v1(1:24)=c(1:24);
for i=1:24
    csfhl3(i)=sqrt(h1(i)^2+v1(i)^2);
end
h1(1:12)=c(16:27);
v1(1:12)=c(16:27);
for i=1:12
    csfhh2(i)=sqrt(h1(i)^2+v1(i)^2);
end
h1(1:12)=c(12:23);
v1(1:12)=c(1:12);
for i=1:12
    csfhl2(i)=sqrt(h1(i)^2+v1(i)^2);
end
h1(1:6)=c(5:10);
v1(1:6)=c(1:6);
for i=1:6
    csfhl1(i)=sqrt(h1(i)^2+v1(i)^2);
end
h1(1:3)=c(1:3);
v1(1:3)=c(1:3);
for i=1:3
    csfhl0(i)=sqrt(h1(i)^2+v1(i)^2);
end


REFERENCES
[1] A. McAndrew, An Introduction to Digital Image Processing with Matlab.
[2] K. S. Thyagarajan, Still Image and Video Compression with Matlab.
[3] R. C. Gonzalez and R. E. Woods, Digital Image Processing.
[4] M. Barni, F. Bartolini, and A. Piva, "Improved wavelet-based watermarking through pixel-wise masking," IEEE Trans. Image Processing, vol. 10, no. 5, pp. 783-791, 2001.
[5] X. Xia, C. G. Boncelet, and G. R. Arce, "A multiresolution watermark for digital images," in IEEE Proc. Int. Conf. Image Processing, USA, 1997, pp. 548-551.
[6] J. R. Kim and Y. S. Moon, "A robust wavelet-based digital watermarking using level-adaptive thresholding," in IEEE Proc. Int. Conf. Image Processing, Japan, 1999, pp. 226-230.
[7] C. T. Hsu and J. L. Wu, "Hidden digital watermarks in images," IEEE Trans. Image Processing, vol. 8, no. 1, pp. 58-68, Jan. 1999.
[8] R. Dugad, K. Ratakonda, and N. Ahuja, "A new wavelet-based scheme for watermarking images," in IEEE Proc. Int. Conf. Image Processing, USA, 1998, pp. 419-423.
[9] R. B. Wolfgang, C. I. Podilchuk, and E. J. Delp, "Perceptual watermarks for digital images and video," in SPIE Proc. Int. Conf. Security and Watermarking of Multimedia Contents, USA, 1999, pp. 40-51.
[10] C. De Vleeschouwer, J. F. Delaigle, and B. Macq, "Invisibility and application functionalities in perceptual watermarking: an overview," Proc. IEEE, vol. 90, no. 1, pp. 64-77, 2002.
[11] B. A. Wandell, Foundations of Vision, Sinauer Associates Inc., Sunderland, MA, 1995.
[12] J. Mannos and D. Sakrison, "The effects of a visual fidelity criterion on the encoding of images," IEEE Trans. Information Theory, vol. 20, pp. 525-536, 1974.
