Lossless Multiresolution Transform for Image Authenticating Watermarking

Benoît Macq
Laboratoire TELE, Université catholique de Louvain
2 Place du Levant, 1348 Louvain-la-Neuve, BELGIUM
Tel: +32 10 47 22 71; Fax: +32 10 47 20 89
e-mail: [email protected]; URL: http://www.tele.ucl.ac.be/MEMBERS/Macq_Benoit_e.html

Abstract

This paper presents a new algorithm able to achieve a one-to-one mapping of a digital image onto another image. This mapping takes into account the limited resolution and the limited quantization of the pixels, and is performed in a multiresolution way. Performing small modifications on the statistics of the details makes it possible to build a lossless, i.e. reversible, authenticating watermarking procedure.

Key-words: image watermarking, multiresolution lossless transform, image authentication

1. Introduction

Watermarking [Macq99] [Hartung99] is a process by which information is embedded secretly in a medium (text, image or sound). Most efforts related to watermarking have been devoted to the use of watermarks for tracing works, in order to detect forgeries and illegal copies. Watermarking has also been studied as a tool for multimedia data authentication [Kundur99]. In this case it makes it possible to certify the origin of the data and to guarantee that the signal has not been modified between the image producer and the user/verifier. The requirements for that kind of watermark are completely different. They have to be:

• secret: if the watermark conveys an identifier of the image features or a link to its header, e.g. a hash of this header, the watermarking procedure, both for the embedding process and the extracting process, has to be secret, or parametrised by a secret key. This secrecy of the procedure is mandatory to prevent malevolent users from embedding false information as watermarks.
• invisible: this allows an end-user to get satisfactory access to the image.
• "fragile": any attempt to manipulate the image, over-watermark it, or whatsoever should lead to the destruction of the watermark. This property, contrary to robustness (in the case of copyright protection), allows the use of the watermark as a certification tool.

Several embedding procedures have been proposed to perform watermarking. The most widely used method is to add to the picture a pattern with good correlation properties (generally provided by spread-spectrum theory), leading to a robust reading procedure. This can be achieved in the spatial or in the frequency domain. The addition may be weighted by perceptual factors. Note that such additions can generally not be reversed, due to the limited range of the pixel values: each time the addition of the watermark pattern to the original image goes out of the pixel range, the watermarked image has to be truncated, making the exact subtraction of the watermark impossible.

Another means of watermarking is to quantize some secretly chosen potential values among all the ones representing an image, fixing some of them to convey a bit "1" and others a bit "0". This was proposed for DCT value differences, for spatial block-mean value differences and for wavelet coefficients [Kundur99]. All these techniques lead to irreversible (most often slight) image damage due to the quantization procedure.

The approach we focus on is watermarking by modifying some statistical properties of the image. This path has been followed by Bender and Gruhl, who applied it to mean values of random partitions of the image. Their method has been mostly rejected due to its weak robustness. In the Bender and Gruhl watermarking procedure, the image is randomly divided into two equally sized subsets ai and bi. A bit is written in the image by adding a given value b to the ai pixels and subtracting the same value b from the bi pixels. The bit can then be retrieved from the image: if the partition is random, the mean value of the ai should equal the mean value of the bi.

In our case, the goal of the watermarking is to validate the link between the root header of the image and the image part of the file itself. We propose to hash this root part and to watermark this hash into the image. A receiver who wants to verify the trust of the link between the header and the image will hash the header and verify whether it is embedded in the received picture. Furthermore, the reader should have a visualization procedure allowing him to visualize a perfect copy of the original image, without the watermark. For such a goal, we are only interested in a reading procedure determining whether the known watermark (the hash of the header is known by the reader) is in the image or not: what is needed is just a statistical test procedure (valid or not). Although tamper-proofing algorithms are available in the literature, we suggest developing a specific method for tackling the problem of reversibility. We propose the use of a multiresolution image transformation leading to transform coefficients with specific statistics. By separating the image contents into a low-resolution part and its corresponding high-resolution details part, we obtain a set of transform coefficients, those corresponding to the details having statistically weak values. An event, i.e. a higher value in the details, can correspond to edges or specific textures. We can also randomly choose some specific locations at which we add a shift value to encode a "1". If we perform that shift on some locations and the values at those locations are statistically higher than normal for a usual picture, we may conclude that a "1" has most probably been written in the image.

2. Lossless Multiresolution Transform

A key issue in existing watermarking schemes is that they rely on the addition of a watermark in the pixel domain or in a transform domain. This addition leads to "out of range" situations when pixels or transform coefficients close to the bounds of the range are modified. This imposes a truncation in which some information is lost. We propose instead a pseudo-sum/pseudo-difference in which a pair of pixels, e.g. two bytes, is transformed by a one-to-one mapping into a pair (pseudo-sum, pseudo-difference). This gives perfect reversibility for any potential value. We propose the following original one-to-one mapping procedure (let us assume that the pixels are encoded on n bits, i.e. 2^n potential values, and that we fix our mapping on a significant training set of typical images):

• We compute the frequency of each potential pair of pixel values, let us say in the horizontal direction first (simply by counters associated with those pairs of values, incremented at each occurrence).
• The 2^n pairs of identical pixel values are assigned the pseudo-difference "0"; the pseudo-sum is the common pixel value.
• We then separate the pairs whose left pixel value is higher than the right one from the others. The first set of pairs will get negative pseudo-differences, the second positive ones.
• For each of these two subsets:
  1. We start with the pseudo-difference equal to 1 (in absolute value).
  2. We allocate this pseudo-difference to the 2^n most probable pairs of the subset.
  3. We sort those 2^n pairs of pixels by their mean value and allocate the pseudo-sums from "0" to 2^n - 1 to the so-classified pairs.
  4. We remove the allocated pairs from the subset, increment the pseudo-difference, and go back to step 2.

At the end of the procedure the pseudo-differences -2^(n-1) and 2^(n-1) each cover only 2^(n-1) pairs; we merge the pairs having pseudo-difference 2^(n-1) with those having pseudo-difference -2^(n-1), suppress the pseudo-difference 2^(n-1) (in fact we are counting from -2^(n-1) to 2^(n-1) - 1), and apply step 3 above to the merged set. The pseudo-sums are then treated following the same procedure, by grouping pairs of pixels in the vertical direction.
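The table-building steps above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `build_mapping` and its interface are hypothetical, and the final merge of the ±2^(n-1) half-sets described above is omitted for brevity.

```python
from collections import Counter
from itertools import product

def build_mapping(images, n_bits):
    """Build a one-to-one (pixel pair) -> (pseudo-sum, pseudo-difference)
    table from a training set, sketching the procedure in the text.
    `images` is an iterable of images, each a list of rows of pixel values.
    NOTE: the final merge of the +/-2^(n-1) half-sets is omitted here."""
    levels = 1 << n_bits  # 2^n potential pixel values
    # Frequency of each horizontal pair of pixel values.
    freq = Counter()
    for img in images:
        for row in img:
            for a, b in zip(row, row[1:]):
                freq[(a, b)] += 1
    mapping = {}
    # Identical pairs: pseudo-difference 0, pseudo-sum = the common value.
    for v in range(levels):
        mapping[(v, v)] = (v, 0)
    # Left > right -> negative pseudo-differences; left < right -> positive.
    for sign in (-1, +1):
        subset = [p for p in product(range(levels), repeat=2)
                  if (p[0] > p[1]) == (sign < 0) and p[0] != p[1]]
        subset.sort(key=lambda p: -freq[p])  # most probable pairs first
        d = 1
        while subset:
            batch, subset = subset[:levels], subset[levels:]
            # Pseudo-sums 0 .. 2^n - 1 in order of the pair means.
            batch.sort(key=lambda p: p[0] + p[1])
            for s, pair in enumerate(batch):
                mapping[pair] = (s, sign * d)
            d += 1
    return mapping
```

Because every pair receives a distinct (pseudo-sum, pseudo-difference), the table can be inverted to recover the original pixels exactly, which is the reversibility property the transform is built for.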

This procedure can be repeated if necessary on the low-resolution part of the image.

3. The proposed watermarking method

We propose to apply the watermarking in the lossless multiresolution transform domain. It is important to notice that the probability density function of image

differences is generally strongly peaked around zero, its mean value (the procedure we have chosen for the lossless multiresolution transform leads straightforwardly to this result).

We propose to embed M bits (160 if the hash is obtained by SHA) by selecting M * N pseudo-difference locations, grouped into M subsets of N pseudo-differences. Each pseudo-difference of one subset will be modified to convey one bit of the watermark. This selection-partitioning is driven by the secret key of the watermarking algorithm. Each time we have to write a "0" we do nothing; each time we have to write a "1" we add, with addition modulo-n, a value b to the absolute value of all the pseudo-differences of the subset corresponding to that bit.

A very simple detection method consists in computing the mean of the absolute values of the set of pseudo-differences corresponding to a given bit. If it is below a threshold to be fixed, we can say that the embedded bit is "0"; it is "1" otherwise. For the "1" bits we then have to subtract modulo-n the value b. Notice that in our method the purpose is only to confirm the "ones" embedded in the partitions, as we only want to validate the presence of a hash value inside the picture. A simpler test therefore consists in subtracting modulo-n the value b from the absolute values of the pseudo-differences marked with a "1": if the energy of such a pseudo-difference, i.e. its absolute value, is decreased, we can conclude that a "1" was effectively written. A special case has to be taken into account for high values to which b has been added: for these values the subtraction of b will lead from a small value to a high value, and nothing can be concluded from them. Since the goal is to validate the hash value, possibly repeated many times, this is not very annoying.
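A minimal sketch of the embedding and mean-threshold detection described above, under two stated assumptions: "modulo-n" is read as arithmetic modulo the pseudo-difference range (a modulus passed in explicitly here), and the key-driven selection-partitioning is stood in for by a seeded PRNG. All function names and parameters are hypothetical.

```python
import random

def select_subsets(num_coeffs, m_bits, n_per_bit, key):
    """Secret-key-driven partitioning: pick m_bits disjoint subsets of
    n_per_bit pseudo-difference locations (seeded PRNG stands in for
    the paper's key-driven selection)."""
    rng = random.Random(key)
    locs = rng.sample(range(num_coeffs), m_bits * n_per_bit)
    return [locs[i * n_per_bit:(i + 1) * n_per_bit] for i in range(m_bits)]

def embed(pdiffs, bits, subsets, b, modulus):
    """Write each bit: a "0" leaves its subset untouched; a "1" adds b
    (modulo `modulus` -- an assumed reading of the paper's
    "addition modulo-n") to the absolute value of every
    pseudo-difference in the subset, keeping the sign."""
    out = list(pdiffs)
    for bit, subset in zip(bits, subsets):
        if bit:
            for i in subset:
                sign = -1 if out[i] < 0 else 1
                out[i] = sign * ((abs(out[i]) + b) % modulus)
    return out

def detect(pdiffs, subsets, threshold):
    """Decide each bit from the mean absolute pseudo-difference of its
    subset: below the threshold -> "0", otherwise -> "1"."""
    return [int(sum(abs(pdiffs[i]) for i in s) / len(s) >= threshold)
            for s in subsets]
```

Since pseudo-differences are strongly peaked around zero, an unmarked subset has a small mean absolute value, while a subset shifted by b does not; the threshold separates the two cases.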

References

[Macq99] Macq B., Scanning the Special Issue on Identification and Protection of Multimedia Information, Proceedings of the IEEE, vol. 87, no. 7, July 1999, pp. 1059-1061.
[Hartung99] Hartung F. and Kutter M., Multimedia Watermarking Techniques, Proceedings of the IEEE, vol. 87, no. 7, July 1999, pp. 1062-1078.
[Voyatzis98] Voyatzis G. and Pitas I., Digital Image Watermarking Using Mixing Systems, Computer Graphics, vol. 22, no. 3, 1998, pp. 405-416.
[Kundur99] Kundur D. and Hatzinakos D., Digital Watermarking for Telltale Tamper Proofing and Authentication, Proceedings of the IEEE, vol. 87, no. 7, July 1999, pp. 1167-1180.