Journal of Information Assurance and Security. ISSN 1554-1010 Volume 9 (2014) pp. 197-204 © MIR Labs, www.mirlabs.net/jias/index.html

Passive Detection of Copy-Move Forgery using Wavelet Transforms and SIFT Features

Mohammad Farukh Hashmi1, Aaditya Hambarde2, Vijay Anand3 and Avinash Keskar4

Department of Electronics Engineering, Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur 440010, India

[email protected], adihambarde1993@gmail.com, [email protected], [email protected]

Abstract: Image forgery is a major issue today in publishing and printing. Many images are morphed before publishing in order to incorporate extra information, and the problem is complicated by the variety of image capture devices: cameras differ in resolution and encoding techniques, and a forged image is often compressed or resized before publication, which makes detecting the forgery a challenging task. Tampering techniques include resizing, blurring, compression, addition of noise, image splicing, etc. The most common type of digital image forgery is copy-move forgery. In this paper, a passive technique for detecting copy-move forgery based on wavelet transforms and SIFT features is proposed. The wavelet transforms employed are the Discrete Wavelet Transform (DWT) and the Dyadic Wavelet Transform (DyWT). The wavelet transform divides the image into four sub-bands, viz. LL, LH, HL and HH. Since the LL sub-band contains most of the information, SIFT is applied only on the LL sub-band to extract key features and their descriptor vectors; similarities between descriptor vectors are then used to decide whether the given image is forged.

Keywords: tampering, wavelet transforms, copy-move forgery, scale invariant feature transform (SIFT), dyadic wavelet transform

I. Introduction

Since the invention of photography, images have been retouched and manipulated; nowadays the adage "seeing is believing" no longer holds. In the last decade, a large amount of digital image data has been manipulated in newspapers, tabloids, fashion magazines, courtroom videos, etc. Detection of image forgery is a challenging task because of the many ways of tampering with an image and the large number of distinct image-capturing devices available. Most forgery detection techniques fall into two major categories: active and passive. Active methods, also known as non-blind or intrusive methods, require some information to be embedded in the original image when the image is captured, for example a watermark or the digital signature of the camera. Because of this prerequisite, and because not all cameras possess such features, active methods have limited scope. Passive methods, also known as non-intrusive or blind methods, do not require any information to be embedded in the digital image.

A digital image can be forged by various attacks such as rotation, scaling, resizing, addition of noise, blurring, compression, image splicing and copy-move. In a copy-move attack, a part of an image is copied and pasted in a different area of the same image. The ease of copy-move forgery makes it the most common forgery used to alter the content of an image [14]. Usually the forger intends to hide some object by covering it with an image segment copied from the same image; for example, a photograph of a crime scene may be tampered with by hiding critical data. Another motive for copy-move forgery is to add extra information to the image; for instance, images of political rallies are manipulated to show a larger number of attendees. A popular example of copy-move forgery is shown in figure 1: the original image contained just three missiles, and it was manipulated to show four missiles by copying and pasting one of the missiles within the same image. In this type of forgery, since the image patch comes from the same image, the colour palette and noise components are largely similar to those of the rest of the image. Requirements of a good copy-move detection algorithm are:

- The algorithm must detect an approximate match between small image patches. A common assumption of copy-move detection algorithms is that the forged part is a connected component rather than an aggregation of pixels.
- It must not be computationally complex, and it must detect a forged image correctly as forged and an authentic image correctly as authentic.

Figure 1. Example of Copy-Move Forgery




II. Related Work


The distinguishing feature of copy-move forgery is that the copied patch and the pasted patch are identical. Hence one method of forgery detection is exhaustive search [15]; however, this method is not feasible due to its computational complexity. Moreover, the copied part may be further processed by noise addition, geometric distortion and filtering, which increases the complexity. Nevertheless, the correlation introduced between the copied and pasted areas is the basis of most detection algorithms. Only passive detection algorithms [12] are discussed here. Birajdar et al. [20] summarized the passive techniques for digital image forgery detection, Al-Qershi et al. [25] addressed key issues in developing a robust copy-move forgery detector, and Qazi et al. [26] presented a comprehensive study of blind methods for forgery detection.

Fridrich et al. [5] introduced a block-matching method based on the Discrete Cosine Transform (DCT). In this method the image was divided into overlapping blocks and the quantized DCT coefficients of the blocks were used as features; lexicographical sorting was then performed and copied regions were reported based on the similarity of the features. The major advantage of using DCT is that most of the energy is concentrated in the first few DCT coefficients while most of the other coefficients are close to zero, so changes in the high-frequency region, such as noise addition, do not affect the detection procedure. Popescu et al. [4] proposed a method similar to Fridrich's that uses Principal Component Analysis (PCA) for dimension reduction; the number of features generated is half that of Fridrich's method, making it more efficient, but it fails when the copied region is re-sampled through rotation or scaling. Li et al. [6] devised a sorted-neighborhood method based on the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD); although the overall process speeds up due to the dimension reduction by DWT, the method is computationally expensive since the SVD calculation takes a lot of time. Bayram et al. [19] introduced a detection method based on the Fourier-Mellin Transform (FMT). Hsu et al. [10] proposed copy-move forgery detection based on feature extraction with a Gabor filter. Li et al. [8] devised a method for copy-move forgery detection using Local Binary Patterns. Qiao et al. [9] used curvelet statistics for detection of copy-move forgery after dividing the image into several overlapping blocks.

Local visual features such as SURF, SIFT and GLOH are robust to several geometrical transformations like rotation, scaling, occlusion and clutter, and hence are being used extensively for image forgery detection. Lin et al. [11] proposed fast copy-move forgery detection using a block-based algorithm with radix sort for improving computational efficiency, but this algorithm is only suitable for resisting JPEG compression and Gaussian noise attacks. Amerini et al. [7] detect copy-move forgery using a SIFT-based method; SIFT is invariant to resizing, rotation, etc. Muhammad et al. [21] proposed an efficient copy-move forgery detection method using the undecimated Dyadic Wavelet Transform (DyWT).

III. Proposed Technique

What can be concluded from the above discussion is that the primary advantage of DWT is the reduction in dimension (fewer features), while the primary advantage of SIFT is its robustness. The proposed method combines the two to give an optimal solution; it can detect an image as forged even if the copied part is rotated or scaled before being pasted. First, the test image is converted to grayscale if it is in RGB format (the technique gave similar results for both RGB and grayscale images). DWT is applied to the image up to level 1, dividing it into four sub-bands: LL, LH, HL and HH. SIFT is then applied to the LL sub-band only, to extract strong (key) features. A search is performed for occurrences of the same features at different locations in the image, and image regions that return matching SIFT features are marked as forged. The block diagram of the technique is illustrated in figure 2. DyWT was also used in the technique and improved the results to an extent: when the image was transformed using DyWT, more key-points were found. The method was tested using the MICC-F220 database [7]; the database of forged colour images released by the Institute of Automation, Chinese Academy of Sciences (CASIA) was also used.

Figure 2. Proposed Technique (block diagram: Image → DWT → LL, LH, HL, HH sub-bands → apply SIFT on LL → extract key features → match repeating features → mark the matched region as forged)
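To make the pipeline concrete, a minimal sketch is given below using PyWavelets for the level-1 Haar DWT and OpenCV for SIFT. This is not the authors' implementation: the ratio-test threshold, the minimum spatial distance between matched key-points and the helper name `detect_copy_move` are illustrative assumptions.

```python
import cv2
import numpy as np
import pywt


def detect_copy_move(image_path, ratio=0.6, min_dist=10):
    """Sketch of the DWT + SIFT copy-move check described above."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # Level-1 Haar DWT; only the LL (approximation) sub-band is kept.
    ll, (lh, hl, hh) = pywt.dwt2(gray.astype(np.float32), 'haar')
    ll_img = cv2.normalize(ll, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # SIFT key-points and 128-dimensional descriptors on the LL sub-band.
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(ll_img, None)
    if descriptors is None or len(keypoints) < 3:
        return False, []

    # Match every descriptor against all descriptors of the same image.
    # With k=3 the best match is the key-point itself, so the 2nd and 3rd
    # neighbours are compared with the ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(descriptors, descriptors, k=3)

    duplicated = []
    for group in matches:
        if len(group) < 3:
            continue
        _, second, third = group            # group[0] is the trivial self-match
        if second.distance < ratio * third.distance:
            p1 = np.array(keypoints[second.queryIdx].pt)
            p2 = np.array(keypoints[second.trainIdx].pt)
            if np.linalg.norm(p1 - p2) > min_dist:   # skip near-identical points
                duplicated.append((tuple(p1), tuple(p2)))

    # A non-empty set of duplicated key-point pairs flags the image as forged.
    return len(duplicated) > 0, duplicated
```

Swapping `pywt.dwt2` for an undecimated decomposition would give the DyWT variant discussed later in the paper.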



A. Discrete Wavelet Transform (DWT)

DWT is a linear transform that operates on a vector whose length is an integer multiple of two, separating the data vector into different frequency components. The process of one-dimensional DWT is shown in figure 3, where h and g denote the low-pass and high-pass filters respectively. The output of the low-pass filter (after factor-two down-sampling) gives the approximation coefficients, and the output of the high-pass filter (after factor-two down-sampling) gives the detail (wavelet) coefficients. It is the output of the low-pass filter that is used for the next level of the transform. The outputs of the two filters (after down-sampling) are given by (1) and (2):

  a_{j+1}[p] = Σ_n h[n − 2p] a_j[n]                                   (1)

  d_{j+1}[p] = Σ_n g[n − 2p] a_j[n]                                   (2)

2-D DWT is similar to 1-D DWT: the transform is performed first on all the image rows and then on the image columns.

Figure 3. 1-D DWT (the input x[n] is filtered by h[n] and g[n], each followed by factor-two down-sampling, yielding the approximation and detail coefficients)

Figure 4. DWT of an image (rows are filtered by h[n] and g[n] and down-sampled, then columns, giving the LL, LH, HL and HH sub-bands)

The 2-D DWT process, with wavelet decomposition up to level 1, is shown in figure 4. The image is decomposed into four sub-images at each level, labeled LL, HL, LH and HH. Most of the data is concentrated in the LL sub-image, which is considered the approximation of the image and represents the coarse-level coefficients of the original image. It is the LL sub-image that is decomposed into four sub-images at the next level. The size of the image is reduced at every level of the DWT, which means there is less data to compute; for example, a square image of size 2^n × 2^n is reduced to 2^(n−1) × 2^(n−1) at the next level. The Haar DWT was used in the proposed method [3].

B. Dyadic Wavelet Transform (DyWT)

DWT is not optimal for data analysis because it is not shift-invariant. To overcome this shortcoming, Mallat and Zhong [22] introduced the DyWT. Unlike DWT, there is no down-sampling in DyWT: whereas DWT reduces the size of the image at every level, with DyWT the size of the image remains the same. At each level the image is divided into four sub-images, labelled LL, LH, HL and HH. Let I be the image, h[k] the low-pass filter and g[k] the high-pass filter. Starting at j = 0 with I = I^0, the coefficients at level j+1 are obtained using (3) and (4):

  c_{j+1}[n] = Σ_k h[k] c_j[n + 2^j k]                                (3)

  d_{j+1}[n] = Σ_k g[k] c_j[n + 2^j k]                                (4)

After inserting 2^j − 1 zeros between the coefficients of h[k] and g[k], let the filters obtained be h_j[k] and g_j[k] respectively. The algorithm [23] is described below:

- At j = 0, I = I^0.
- To obtain the coefficients at level j+1:
  - filter I^j with h_j[k] to obtain the approximation coefficients c_{j+1};
  - filter I^j with g_j[k] to obtain the detail coefficients d_{j+1}.

1-D DyWT is shown in figure 5. Figure 6 shows the DyWT decomposition of an image.

Figure 5. 1-D DyWT (c_j is filtered by h_j[k] to give c_{j+1} and by g_j[k] to give d_{j+1}, with no down-sampling)

Figure 6. DyWT decomposition of an image (rows and columns are filtered by h_j[k] and g_j[k], without down-sampling, to give the LL, LH, HL and HH sub-bands)
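To illustrate the size behaviour described above, the snippet below uses PyWavelets; the stationary wavelet transform `swt2` serves here only as a readily available undecimated stand-in for the DyWT, which is an assumption of this sketch rather than the authors' exact transform.

```python
import numpy as np
import pywt

image = np.random.rand(256, 256)   # stand-in for a 2^n x 2^n grayscale image

# Decimated DWT, level 1: every sub-band is half the size in each dimension.
ll, (lh, hl, hh) = pywt.dwt2(image, 'haar')
print(ll.shape)   # (128, 128)

# Undecimated transform: sub-bands keep the original image size, which is
# the shift-invariance property that motivates using the DyWT here.
coeffs = pywt.swt2(image, 'haar', level=1)
cA, (cH, cV, cD) = coeffs[0]
print(cA.shape)   # (256, 256)
```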



C. Scale Invariant Feature Transform (SIFT)

In any image there are many points of interest which can be extracted to provide a feature description of the image. The SIFT algorithm can be used to locate a particular object in an image containing many different objects. It provides a set of features that are not modified by scaling or rotation, and it is very resilient to noise in the image. A four-stage filtering approach is used in the SIFT algorithm [1].

1) Scale-space extrema detection
In this step, points which can be identified from different views are located. The most efficient way is to use a Gaussian scale-space function, defined by (5), where I(x, y) is the input image and G(x, y, σ) is the Gaussian function. Key-points in the image are found by the Difference of Gaussians (DoG) technique proposed by Lowe [1]; DoG is given by (6). Each sample is compared with its eight neighbours on the same scale and with nine neighbours one scale up and one scale down; it is a point of extremum if it is the minimum or maximum among all these 26 values [18].

  L(x, y, σ) = G(x, y, σ) * I(x, y)                                   (5)

  D(x, y, σ) = L(x, y, kσ) − L(x, y, σ)                               (6)

2) Key-point localization
This stage eliminates key-points with poor contrast by applying the Laplacian operator to the points found in the previous stage; if the value of this function lies below a certain threshold, the point is excluded. Poorly localized points along edges are eliminated as follows: in the DoG function there is a large principal curvature across an edge but a small curvature in the perpendicular direction [1][18]. The 2 × 2 Hessian matrix at the location is calculated and the ratio of the largest to the smallest eigenvalue is obtained; if this ratio exceeds a threshold, that key-point is eliminated.

3) Assignment of orientation
In this step every key-point is assigned an orientation based on local image properties. The steps are as follows:
a) Compute the Gaussian-smoothed image L using the scale function described above.
b) Compute the gradient magnitude m and the orientation θ as in (7) and (8).

  m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²]    (7)

  θ(x, y) = tan⁻¹[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))]  (8)

c) An orientation histogram is formed from the gradient orientations of sample points. The highest peak in the histogram, together with any other peak within 0.8 times the highest peak, is used to create an orientation. It is therefore possible that a key-point is assigned multiple orientations.

4) Key-point descriptor
Key-point descriptors are created from the local gradient data used above. The gradient information is rotated to align it with the orientation of the key-point and then weighted by a Gaussian with variance 1.5 times the key-point scale [13]. This is used to create a set of histograms over a square window centred on the key-point. Usually a set of 16 histograms, each with 8 bins, is used, resulting in feature vectors with 128 elements. These feature vectors are then used to identify possible objects in the digital image.
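OpenCV's SIFT implementation exposes these quantities directly; the short sketch below is illustrative only ('test.png' is a placeholder filename, and at least one key-point is assumed to be found). It shows the location, scale and orientation assigned in step 3 and the 128-element descriptor built in step 4.

```python
import cv2

img = cv2.imread('test.png', cv2.IMREAD_GRAYSCALE)   # placeholder test image

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

kp = keypoints[0]
print(kp.pt)              # (x, y) location of the key-point
print(kp.size)            # scale at which the extremum was detected
print(kp.angle)           # assigned orientation in degrees
print(descriptors.shape)  # (number_of_keypoints, 128): 16 histograms x 8 bins
```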

IV. Comparative Study

A. Detection based on DyWT
Muhammad et al. [24] proposed a detection algorithm based on DyWT. The algorithm divides the image into four sub-images using DyWT; the LL sub-image is then divided into overlapping blocks and similar blocks are matched, while in the HH sub-image dissimilarities are checked on the basis of Euclidean distance. This algorithm is not robust to rotation and scaling attacks.

B. Detection based on DWT and SIFT
Forgery detection using SIFT alone has already been performed; SIFT is invariant to geometrical attacks. The authors have implemented forgery detection based on DWT and SIFT. DWT is a multi-resolution technique which helps extract strong corners and edges effectively.

C. Detection based on DyWT and SIFT
This algorithm is also based on DyWT. Since the Dyadic Wavelet Transform is shift-invariant, it represents the image information better than the Discrete Wavelet Transform. Key features are extracted by a combination of DyWT, SIFT and the RANSAC algorithm. This method has several advantages over the previous ones: it is robust to attacks such as geometrical transformation, resizing and any combination of these, and the key features it extracts are more numerous than those extracted by the detection algorithm based on DWT and SIFT.
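Since the DyWT + SIFT variant additionally relies on RANSAC, a hedged sketch of that verification step is shown below: matched key-point pairs from the SIFT stage are fitted to a single affine transformation and only the geometrically consistent (inlier) matches are retained. The function name and the reprojection threshold are assumptions for illustration, not the authors' exact procedure.

```python
import cv2
import numpy as np


def ransac_filter(matched_pairs, reproj_thresh=3.0):
    """Keep only matches consistent with one affine transform (RANSAC).

    matched_pairs: list of ((x1, y1), (x2, y2)) point pairs from SIFT matching.
    A non-trivial inlier set supports the hypothesis that one region was
    copied and pasted elsewhere, possibly rotated or scaled.
    """
    if len(matched_pairs) < 3:          # an affine fit needs at least 3 pairs
        return []

    src = np.float32([p for p, _ in matched_pairs])
    dst = np.float32([q for _, q in matched_pairs])

    # Robustly estimate the affine transform between copied and pasted regions.
    matrix, inliers = cv2.estimateAffine2D(
        src, dst, method=cv2.RANSAC, ransacReprojThreshold=reproj_thresh)
    if matrix is None:
        return []

    return [pair for pair, ok in zip(matched_pairs, inliers.ravel()) if ok]
```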

V. Results

A. Performance parameters
Before presenting the results, a few parameters that measure the performance of the algorithms are defined. The performance of any system can be measured in terms of sensitivity, specificity and accuracy. Sensitivity (also called recall) relates to the ability of the algorithm to detect a forged image correctly as forged. Specificity relates to the ability of the algorithm to identify an authentic image correctly as authentic. Hence high values of both sensitivity and specificity are required for good performance of the system. Precision is the probability of truly detecting a forgery; it is also known as the Positive Predictive Value (PPV). A few terms are first defined:

TP (True Positive): forged image identified as forged
FP (False Positive): authentic image identified as forged
TN (True Negative): authentic image identified as authentic
FN (False Negative): forged image identified as authentic

  sensitivity = TP / (TP + FN)                                        (9)
  specificity = TN / (TN + FP)                                        (10)
  accuracy = (TP + TN) / (TP + TN + FP + FN)                          (11)
  PPV = TP / (TP + FP)        (PPV: Positive Predictive Value)        (12)
  NPV = TN / (TN + FN)        (NPV: Negative Predictive Value)        (13)
  FPR = 1 − specificity       (FPR: False Positive Rate)              (14)
  FNR = 1 − sensitivity       (FNR: False Negative Rate)              (15)
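As a worked example of equations (9)–(15), the small sketch below reproduces the figures reported in Table 2 from the counts given in Table 1 below (DWT + SIFT, 1000 forged and 1000 authentic images).

```python
# Counts from Table 1 (DWT + SIFT).
TP, TN, FP, FN = 664, 982, 18, 336

sensitivity = TP / (TP + FN)                   # eq. (9)  -> 0.664 (recall)
specificity = TN / (TN + FP)                   # eq. (10) -> 0.982
accuracy    = (TP + TN) / (TP + TN + FP + FN)  # eq. (11) -> 0.823
ppv         = TP / (TP + FP)                   # eq. (12) -> precision, ~0.974
npv         = TN / (TN + FN)                   # eq. (13)
fpr         = 1 - specificity                  # eq. (14) -> 0.018
fnr         = 1 - sensitivity                  # eq. (15) -> 0.336

print(round(sensitivity, 3), round(specificity, 3),
      round(accuracy, 3), round(fpr, 3), round(fnr, 3))
```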

B. Detection based on DWT and SIFT
All computations were performed on a machine with 4 GB RAM and an Intel Core i5 processor; the software used was MATLAB R2012a. 2000 images were tested using the proposed method, of which 1000 were forged and 1000 were original. For the image shown in figure 7, 325 key-points were found and a total of 22 matching features were detected; the time required for computation was 2.893 seconds. It was found that the smaller the test image and the copied part, the fewer features were found. The average computation time was 2.1 seconds.

Figure 7. (a) Original image, (b) Tampered image, (c) DWT of the image, (d) Matching of similar features

Table 1. Results obtained with DWT and SIFT
  Authentic  Tampered  TP   TN   FP  FN
  1000       1000      664  982  18  336

Table 2. Performance of the system
  Sensitivity  Specificity  Accuracy  FPR    FNR
  0.664        0.982        0.823     0.018  0.336

C. Robustness to different attacks
The robustness of the algorithm was checked by testing images in which the copied part was rotated, scaled, or both, before being pasted. Images tested with various attacks and the results obtained are shown in figure 8. Table 3 gives the key-points found, matches detected and computation time for the different types of attacks. The computational complexity with or without an attack remains roughly the same, showing the robustness of the technique towards various attacks; however, the number of matching features detected was highest when the copied area was pasted without any rotation or scaling.

Figure 8. (a) Without rotation or scaling, (b) Scaling, (c) Rotation, (d) Rotation and scaling

Table 3. Robustness to various attacks
  Type of attack             Key-points found  Matches detected  Computation time (s)
  Without rotation/scaling   366               28                0.77
  Scaling                    381               19                0.88
  Rotation                   370               20                1.04
  Rotation and scaling       380               22                0.84

D. Comparison with existing techniques
The accuracy of various existing methods was compared with that of the proposed method. Zhang et al. [16] achieved an accuracy of 77.32% with a tampered-region size of 64×64. The method proposed by Popescu et al. [4] managed an accuracy of around 90% for a tampered block size of 128×128 in an image of size 512×512 and a JPEG quality factor of 85. Li et al. [17] achieved an accuracy of 47.21%; this low accuracy was, however, compensated by high values of sensitivity and specificity. The proposed method achieved an accuracy of 82.30% with 1000 forged and 1000 authentic images. The comparison is summarized in figure 9.

Figure 9. Comparison with existing methods

E. Detection based on DyWT and SIFT
The complete detection process is shown in figures 10(a)–10(d). 10(a) is the tampered image and 10(b) shows the DyWT of the image; only the LL part is shown in 10(c). Feature matching is applied on the LL part only, and the matching feature points are shown in 10(d). For the image shown in figure 10, 748 key-points were found and the number of matching points was 46; the time required for detection was 0.962806 seconds.

Figure 10. (a) Tampered image, (b) DyWT of the image, (c) LL part of the image, (d) Matching of similar features

The images shown in figure 11 depict the robust nature of this algorithm. In these images the copied part is first pre-processed and then pasted, and the angle by which the copied part is rotated is also varied. In 11(b) the copied part is rotated by a smaller angle than in 11(c), and in 11(f) the copied part is scaled to a lesser extent than in 11(d). The copied part is both rotated and scaled in 11(e), in which case fewer key-points and matching features were obtained. As seen from the figure, the forgery in all cases is detected accurately; thus the algorithm is robust to various scaling and rotation attacks.

Figure 11. (a) Simple copy-move forgery, (b) Rotation to a smaller degree, (c) Rotation to a higher degree, (d) Scaling attack, (e) Rotation and scaling attacks, (f) Scaling to a lower level

F. Comparison between different methodologies
A comparison between detection by SIFT alone, by DWT and SIFT, and by DyWT and SIFT is presented below. The performance parameters precision, recall and FPR are used to compare the three methods. As illustrated by Table 4, the detection algorithm based on DWT and SIFT achieves the maximum precision. However, for a single image, the algorithm based on DyWT and SIFT was found to be more accurate, since it gave a larger number of key-points and matching features. It is also less complex than the method proposed by Muhammad et al. [24], which uses DyWT and block matching. Hashmi et al. [2] covered the algorithm based on DWT and SIFT.

Table 4. Comparison of different methodologies
  Method      TP   TN   FP   FN
  SIFT        745  964  36   255
  DWT+SIFT    664  982  18   336
  DyWT+SIFT   810  900  100  190

Table 5. Comparison of performance parameters
  Method      Precision  Recall  FPR
  SIFT        0.95391    0.745   0.036
  DWT+SIFT    0.97361    0.664   0.018
  DyWT+SIFT   0.89011    0.810   0.010

Table 6. Comparison of complexity
  Method      Matching feature points  Computational complexity
  SIFT        Intermediate             More
  DWT+SIFT    High                     Least
  DyWT+SIFT   Highest                  Intermediate

Table 7. Robustness to different types of attacks
  Method               Robustness to scaling  Robustness to rotation  Robustness to noise (AWGN)
  DyWT+Block matching  None                   Less                    None
  DWT+SIFT             Average                Intermediate            Low
  DyWT+SIFT            More                   More                    High

Figure 12. Comparison between different methodologies

VI. Conclusion
Various techniques of image tampering were studied, and previous algorithms for copy-move forgery detection were reviewed in this paper. The proposed technique uses both a wavelet transform and SIFT; the wavelet transforms used were DWT and DyWT. DWT reduces the image data so that only the relevant low-frequency information is processed further, while SIFT provides robustness. Hence the proposed algorithm detects image forgery even if the copied part is rotated or scaled before being pasted. The overall accuracy was found to be 82.30% on a database of 2000 images (1000 authentic and 1000 forged), and the proposed method gave reasonable values of sensitivity and specificity. The robustness of the proposed algorithm was checked by testing tampered images in which the copied part is rotated or scaled and then pasted, and the proposed method was compared with existing methods. A further attempt was made to use DyWT instead of DWT to improve the performance; it was observed that with DyWT a larger number of key-points and matching features were found. In future work it is proposed to experiment with other feature extractors such as SURF.

Acknowledgment
This work was partially funded by the Center of Excellence, Department of Electronics Engineering, Visvesvaraya National Institute of Technology, Nagpur. We are also indebted to Dr. N. S. Chaudhari, Director, VNIT, Nagpur, for his administrative assistance.

References
[1] Lowe, David G. "Distinctive image features from scale-invariant keypoints." International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[2] Hashmi, M. F., Hambarde, A. R., and Keskar, A. G. "Copy move forgery detection using DWT and SIFT features." In Proc. of the 13th IEEE International Conference on Intelligent Systems Design and Applications (ISDA 2013), pp. 188-193, December 2013.
[3] Haar, Alfred. "Zur Theorie der orthogonalen Funktionensysteme." Mathematische Annalen, vol. 69, no. 3, pp. 331-371, 1910.
[4] Popescu, Alin C., and Hany Farid. "Exposing digital forgeries by detecting duplicated image regions." Department of Computer Science, Dartmouth College, Technical Report TR2004-515, August 2004.
[5] Fridrich, Jessica, David Soukal, and Jan Lukáš. "Detection of copy-move forgery in digital images." In Proceedings of the Digital Forensic Research Workshop, Cleveland, OH, USA, pp. 55-61, August 2003.
[6] Li, Guohui, Qiong Wu, Dan Tu, and Shaojie Sun. "A sorted neighbourhood approach for detecting duplicated regions in image forgeries based on DWT and SVD." In Proceedings of the IEEE International Conference on Multimedia and Expo, pp. 1750-1753, 2007.
[7] Amerini, Irene, Lamberto Ballan, Roberto Caldelli, Alberto Del Bimbo, and Giuseppe Serra. "A SIFT-based forensic method for copy-move attack detection and transformation recovery." IEEE Transactions on Information Forensics and Security, vol. 6, no. 3, pp. 1099-1110, 2011.
[8] Li, Leida, Shushang Li, Hancheng Zhu, Shu-Chuan Chu, John F. Roddick, and Jeng-Shyang Pan. "An efficient scheme for detecting copy-move forged images by local binary patterns." Journal of Information Hiding and Multimedia Signal Processing, vol. 4, no. 1, pp. 15-56, January 2013.
[9] Qiao, Mengyu, Andrew Sung, Qingzhong Liu, and Bernardete Ribeiro. "A novel approach for detection of copy-move forgery." In Proceedings of the Fifth International Conference on Advanced Engineering Computing and Applications in Sciences (ADVCOMP 2011), pp. 44-47, 2011.
[10] Hsu, Hao-Chiang, and Min-Shi Wang. "Detection of copy-move forgery image using Gabor descriptor." In Proceedings of the IEEE International Conference on Anti-Counterfeiting, Security and Identification (ASID 2012), pp. 1-4, 2012.
[11] Lin, Hwei-Jen, Chun-Wei Wang, and Yang-Ta Kao. "Fast copy-move forgery detection." WSEAS Transactions on Signal Processing, vol. 5, no. 5, pp. 188-197, 2009.
[12] Bayram, Sevinc, Husrev Taha Sencar, and Nasir Memon. "A survey of copy-move forgery detection techniques." In Proc. of the IEEE Western New York Image Processing Workshop, pp. 538-542, 2008.
[13] Gilani, S. Asif M. "Object recognition by modified scale invariant feature transform." In Proc. of the Third IEEE International Workshop on Semantic Media Adaptation and Personalization (SMAP 2008), pp. 33-39, 2008.
[14] Jing, Li, and Chao Shao. "Image copy-move forgery detecting based on local invariant feature." Journal of Multimedia, vol. 7, no. 1, pp. 90-97, 2012.
[15] Yadav, Preeti, and Yogesh Rathore. "Detection of copy-move forgery of images using discrete wavelet transform." International Journal on Computer Science and Engineering, vol. 4, no. 4, pp. 365-370, April 2012.
[16] Zhang, Jing, Zhanlei Feng, and Yuting Su. "A new approach for detecting copy-move forgery in digital images." In Proc. of the 11th IEEE Singapore International Conference on Communication Systems (ICCS 2008), pp. 362-366, November 2008.
[17] Li, Weihai, Yuan Yuan, and Nenghai Yu. "Passive detection of doctored JPEG image via block artifact grid extraction." Signal Processing, vol. 89, no. 9, pp. 1821-1829, September 2009.
[18] Badrinath, G. S., and Phalguni Gupta. "Palmprint verification using SIFT features." In Proc. of the IEEE First Workshops on Image Processing Theory, Tools and Applications (IPTA 2008), pp. 1-8, 2008.
[19] Bayram, S., Sencar, H. T., and Memon, N. "An efficient and robust method for detecting copy-move forgery." In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2009), pp. 1053-1056, 2009.
[20] Birajdar, G. K., and Mankar, V. H. "Digital image forgery detection using passive techniques: A survey." Digital Investigation, vol. 10, no. 3, pp. 226-245, 2013.
[21] Muhammad, G., Hussain, M., Khawaji, K., and Bebis, G. "Blind copy move image forgery detection using dyadic undecimated wavelet transform." In Proceedings of the IEEE 17th International Conference on Digital Signal Processing (DSP 2011), pp. 1-6, 2011.
[22] Mallat, S., and Zhong, S. "Characterization of signals from multiscale edges." IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 7, pp. 710-732, 1992.
[23] Muhammad, N., Hussain, M., Muhammad, G., and Bebis, G. "Copy-move forgery detection using dyadic wavelet transform." In Proceedings of the IEEE Eighth International Conference on Computer Graphics, Imaging and Visualization (CGIV 2011), pp. 103-108, 2011.
[24] Muhammad, G., Hussain, M., and Bebis, G. "Passive copy move image forgery detection using undecimated dyadic wavelet transform." Digital Investigation, vol. 9, no. 1, pp. 49-57, 2012.
[25] Al-Qershi, Osamah M., and Bee Ee Khoo. "Passive detection of copy-move forgery in digital images: State-of-the-art." Forensic Science International, vol. 231, no. 1, pp. 284-295, 2013.
[26] Qazi, Tanzeela, Khizar Hayat, Samee U. Khan, Sajjad A. Madani, Imran A. Khan, Joanna Kołodziej, Hongxiang Li, Weiyao Lin, Kin Choong Yow, and Cheng-Zhong Xu. "Survey on blind image forgery detection." IET Image Processing, vol. 7, no. 7, pp. 660-670, 2013.

Author Biographies

Mohammad Farukh Hashmi received his B.E. in Electronics & Communication Engineering and his M.E. in Digital Techniques & Instrumentation (2010) from R.G.P.V. Bhopal University. He is currently pursuing his doctoral studies at VNIT Nagpur under the supervision of Dr. A. G. Keskar. His current research interests are computer vision, circuit design and digital IC design. He is a member of IEEE and ISTE.

Aaditya Hambarde was born in Mumbai in 1993. He is an undergraduate student at Visvesvaraya National Institute of Technology, Nagpur, and will receive his B.Tech degree in Electronics & Communication Engineering in September 2015. His current research interests include computer vision, pattern recognition and machine learning.

Vijay Anand received his M.Tech degree in Communication & Systems Engineering from Visvesvaraya National Institute of Technology, Nagpur. He completed his B.Tech in Electronics & Communication Engineering at Amrita School of Engineering, Bangalore, in 2012. His research interests include computer vision and signal processing.

Avinash Keskar completed his B.E. from VNIT, Nagpur in 1979 and his M.E. from IISc, Bangalore in 1983, receiving a gold medal for each. He has 26 years of teaching experience and 7 years of industrial experience, and is currently a Professor in the Department of Electronics Engineering, VNIT Nagpur. Dr. Keskar is a senior member of IEEE, FIETE, LMISTE and FIE.