Combining SVM classifiers for email anti-spam filtering

Ángela Blanco, Manuel Martín-Merino



Abstract. Spam, also known as Unsolicited Commercial Email (UCE), is becoming a nightmare for Internet users and providers. Machine learning techniques such as the Support Vector Machine (SVM) have achieved high accuracy in filtering spam messages. However, a certain proportion of legitimate emails is often classified as spam (false positive errors), even though this kind of error is prohibitively expensive. In this paper we address the problem of reducing, in particular, the false positive errors of anti-spam email filters based on the SVM. To this aim, an ensemble of SVMs that combines multiple dissimilarities is proposed. The experimental results suggest that the new method outperforms classifiers based solely on a single dissimilarity, as well as a widely used combination strategy such as bagging.

1 Introduction

Unsolicited commercial email, also known as spam, is becoming a serious problem for Internet users and providers [9]. Several researchers have applied machine learning techniques to improve the detection of spam messages. Naive Bayes models are the most popular [2], but other authors have applied Support Vector Machines (SVM) [8], boosting and decision trees [5] with remarkable results. The SVM has proved particularly attractive in this application because it is robust against noise and is able to handle a large number of features [21]. Errors in anti-spam email filtering are strongly asymmetric: false positive errors, that is, valid messages that are blocked, are prohibitively expensive. Several authors have proposed new versions of the original SVM algorithm that help to reduce the false positive errors [20, 14]. In particular, it has been suggested that combining non-optimal classifiers can help to reduce the variance of the predictor [20, 4, 3] and consequently the misclassification error. To achieve this goal, different versions of the classifier are usually built by sampling the patterns or the features [4]. However, in our application it is expected that the aggregation of strong classifiers will further reduce the false positive errors [18, 11, 7].

Universidad Pontificia de Salamanca, C/Compañía 5, 37002 Salamanca, Spain. Emails: [email protected], [email protected]

In this paper we address the problem of reducing the false positive errors by combining classifiers based on multiple dissimilarities. To this aim, a diversity of classifiers is built considering dissimilarities that reflect different features of the data. The dissimilarities are first embedded into a Euclidean space, where an SVM is adjusted for each measure. Next, the classifiers are aggregated using a voting strategy [13]. The method proposed has been applied to the Spam UCI machine learning database [19] with remarkable results. This paper is organized as follows. Section 2 introduces the dissimilarities considered by the ensemble of classifiers. Section 3 presents our method to combine classifiers based on dissimilarities. Section 4 illustrates the performance of the algorithm in the challenging problem of spam filtering. Finally, section 5 draws conclusions and outlines future research trends.

2 The problem of distances revisited

An important step in the design of a classifier is the choice of a proper dissimilarity that reflects the proximities among the objects. However, the choice of a good dissimilarity for the problem at hand is not an easy task. Each measure reflects different features of the dataset, and no dissimilarity outperforms the others in a wide range of problems. In this section, we briefly comment on the main differences among several dissimilarities that can be applied to model the proximities among emails. Let $\vec{x}$, $\vec{y}$ be the vectorial representations of two emails. The Euclidean distance is defined as:

$$ d_{euclid}(\vec{x},\vec{y}) = \sqrt{\sum_{i=1}^{d} (x_i - y_i)^2}, \qquad (1) $$

where $d$ is the dimensionality of the vectorial representation and $x_i$ is the value of feature $i$ in the email $\vec{x}$. The Euclidean distance evaluates whether the features considered differ significantly in both messages. This measure is sensitive to the size of the email body. The cosine dissimilarity reflects the angle between the emails $\vec{x}$ and $\vec{y}$ and is defined as:

$$ d_{cosine}(\vec{x},\vec{y}) = 1 - \frac{\vec{x}^T \vec{y}}{\|\vec{x}\|\,\|\vec{y}\|}, \qquad (2) $$

The value is independent of the message length. It differs significantly from the Euclidean distance when the data is not normalized. The correlation measure checks if the features that codify the spam change in the same way in both emails and it is defined as:

$$ d_{cor.}(\vec{x},\vec{y}) = 1 - \frac{\sum_{i=1}^{d} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{d} (x_i - \bar{x})^2}\,\sqrt{\sum_{j=1}^{d} (y_j - \bar{y})^2}}, \qquad (3) $$

Correlation-based measures tend to group together samples whose features are linearly related. The correlation differs significantly from the cosine if the means of the vectors that represent the emails are not zero. The correlation measure introduced earlier is distorted by outliers. The Spearman rank coefficient avoids this problem by computing a correlation between the ranks of the features. It is defined as:

$$ d_{spearm.}(\vec{x}\,',\vec{y}\,') = 1 - \frac{\sum_{i=1}^{d} (x'_i - \bar{x}')(y'_i - \bar{y}')}{\sqrt{\sum_{i=1}^{d} (x'_i - \bar{x}')^2}\,\sqrt{\sum_{j=1}^{d} (y'_j - \bar{y}')^2}}, \qquad (4) $$

where $x'_i = \mathrm{rank}(x_i)$ and $y'_j = \mathrm{rank}(y_j)$. Notice that this measure does not take into account the information about the quantitative values of the features. Another kind of correlation measure that helps to overcome the problem of outliers is the Kendall-τ index, which is related to the Mutual Information probabilistic measure. It is defined as:

$$ d_{kend\text{-}tau}(\vec{x},\vec{y}) = 1 - \frac{\sum_{i=1}^{d}\sum_{j=1}^{d} C^{x}_{ij} - C^{y}_{ij}}{d(d-1)}, \qquad (5) $$

where $C^{x}_{ij} = \mathrm{sign}(x_i - x_j)$ and $C^{y}_{ij} = \mathrm{sign}(y_i - y_j)$. The above discussion suggests that the normalization of the data should be avoided, because this preprocessing may partially destroy the disparity among the dissimilarities. Besides, when the emails are codified in high dimensional and noisy spaces, the dissimilarities mentioned above are affected by the 'curse of dimensionality' [1, 15]: most of the dissimilarities become almost constant and the differences among them are lost [12, 16]. This problem can be avoided by selecting a small number of features before the dissimilarities are computed.
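As an illustration, the dissimilarities (1)-(5) can be computed with a few lines of Python. The sketch below is ours (not part of the original paper); it assumes the emails are already codified as numeric feature vectors and uses scipy's Spearman and Kendall coefficients in place of a literal transcription of equations (4) and (5).

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau

def euclidean(x, y):
    # Eq. (1): sensitive to the size of the email body
    return np.sqrt(np.sum((x - y) ** 2))

def cosine(x, y):
    # Eq. (2): angle between the two emails, independent of message length
    return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

def correlation(x, y):
    # Eq. (3): do the features change in the same way in both emails?
    xc, yc = x - x.mean(), y - y.mean()
    return 1.0 - np.dot(xc, yc) / (np.linalg.norm(xc) * np.linalg.norm(yc))

def spearman(x, y):
    # Eq. (4): correlation between feature ranks, robust to outliers
    return 1.0 - spearmanr(x, y).correlation

def kendall(x, y):
    # Rank-based measure in the spirit of eq. (5), using scipy's Kendall tau
    return 1.0 - kendalltau(x, y).correlation

x, y = np.random.rand(57), np.random.rand(57)   # toy 57-feature email representations
for name, dist in [("euclidean", euclidean), ("cosine", cosine),
                   ("correlation", correlation), ("spearman", spearman),
                   ("kendall", kendall)]:
    print(f"{name}: {dist(x, y):.3f}")
```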

3 Combining classifiers based on dissimilarities

The SVM is a powerful machine learning technique that is able to work with high dimensional and noisy data [21]. However, the original SVM algorithm is not able to work directly from a dissimilarity matrix. To overcome this problem, we follow the approach of [17]. First, each

dissimilarity is embedded into a Euclidean space such that the inter-pattern distances reflect approximately the original dissimilarities. Next, the test points are embedded via a linear algebra operation, and finally the SVM is adjusted and evaluated.

Let $D \in \mathbb{R}^{n \times n}$ be a dissimilarity matrix made up of the object proximities. A configuration in a low dimensional Euclidean space can be found via a metric multidimensional scaling (MDS) algorithm [6] such that the original dissimilarities are approximately preserved. Let $X = [\vec{x}_1, \ldots, \vec{x}_n]^T$ be the matrix of the object coordinates for the training patterns. Define $B = XX^T$ as the matrix of inner products, which is related to the dissimilarity matrix via the following equation:

$$ B = -\frac{1}{2} J D^2 J, \qquad (6) $$

where $J = I - \frac{1}{n}\mathbf{1}\mathbf{1}^T \in \mathbb{R}^{n \times n}$ is the centering matrix and $I$ is the identity matrix. If $B$ is positive definite, the object coordinates in the low dimensional space $\mathbb{R}^k$ can be found through a singular value decomposition [6, 10]:

$$ X_k = V_k \Lambda_k^{1/2}, \qquad (7) $$
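For illustration, equations (6) and (7) can be implemented directly with numpy. The sketch below is ours: the function name and the clipping of small or negative eigenvalues are assumptions, not the correction used in the paper.

```python
import numpy as np

def mds_embed_train(D, k):
    """Classical metric MDS: embed an n x n dissimilarity matrix D into R^k."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix J = I - (1/n) 1 1^T
    B = -0.5 * J @ (D ** 2) @ J                  # inner-product matrix, eq. (6)
    eigvals, eigvecs = np.linalg.eigh(B)         # eigendecomposition of the symmetric matrix B
    order = np.argsort(eigvals)[::-1][:k]        # keep the k largest eigenvalues
    lam = np.clip(eigvals[order], 1e-12, None)   # guard against negative or zero eigenvalues
    V = eigvecs[:, order]
    return V * np.sqrt(lam), V, lam              # X_k = V_k Lambda_k^{1/2} (eq. 7), V_k, Lambda_k
```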

where $V_k \in \mathbb{R}^{n \times k}$ is an orthogonal matrix whose columns are the first eigenvectors of $XX^T$ and $\Lambda_k \in \mathbb{R}^{k \times k}$ is a diagonal matrix with the corresponding eigenvalues. Several dissimilarities introduced in section 2 generate inner product matrices $B$ that are not positive definite. To avoid this problem, we have added a non-zero constant to the non-diagonal elements of the dissimilarity matrix [17].

Once the training patterns have been embedded into a low dimensional space, the test patterns can be added to this space via a linear projection [17]. Next we briefly detail the process. Let $X \in \mathbb{R}^{n \times k}$ be the object configuration for the training patterns in $\mathbb{R}^k$ and $X_n = [\vec{x}_1, \ldots, \vec{x}_s]^T \in \mathbb{R}^{s \times k}$ the matrix of the object coordinates sought for the test patterns. Let $D_n \in \mathbb{R}^{s \times n}$ be the matrix of dissimilarities between the $s$ test patterns and the $n$ training patterns that have already been projected. The matrix $B_n \in \mathbb{R}^{s \times n}$ of inner products among the test and training patterns can be found as:

$$ B_n = -\frac{1}{2}\left(D_n^2 J - U D^2 J\right), \qquad (8) $$

where $J \in \mathbb{R}^{n \times n}$ is the centering matrix and $U = \frac{1}{n}\mathbf{1}\mathbf{1}^T \in \mathbb{R}^{s \times n}$. Since the matrix of inner products verifies

$$ B_n = X_n X^T, \qquad (9) $$

$X_n$ can be found as the least mean-square error solution to (9), that is:

$$ X_n = B_n X (X^T X)^{-1}. \qquad (10) $$

Given that $X^T X = \Lambda_k$ and considering that $X = V_k \Lambda_k^{1/2}$, the coordinates for the test points can be obtained as:

$$ X_n = B_n V_k \Lambda_k^{-1/2}, \qquad (11) $$

which can be easily evaluated through simple linear algebraic operations. Next we introduce the method proposed to combine classifiers based on different dissimilarities. Our method is based on the evidence that different dissimilarities reflect different features of the dataset (see section 2). Therefore, classifiers based on different measures will misclassify different sets of patterns. Figure 1 shows, for instance, that the bold patterns are assigned to the wrong class by only one classifier, but with a voting strategy they are assigned to the right class.
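A companion sketch (again ours, under the same assumptions as before) for the out-of-sample projection of equations (8)-(11), reusing the $V_k$ and $\Lambda_k$ returned by mds_embed_train above:

```python
import numpy as np

def mds_project_test(Dn, D, V, lam):
    """Project s test patterns into the training embedding.
    Dn: s x n test-to-training dissimilarities; D: n x n training dissimilarities;
    V, lam: eigenvectors and eigenvalues returned by mds_embed_train."""
    n, s = D.shape[0], Dn.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    U = np.ones((s, n)) / n                        # U = (1/n) 1 1^T, s x n
    Bn = -0.5 * (Dn ** 2 @ J - U @ D ** 2 @ J)     # test-to-training inner products, eq. (8)
    return Bn @ V / np.sqrt(lam)                   # X_n = B_n V_k Lambda_k^{-1/2}, eq. (11)
```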

Figure 1: Aggregation of classifiers using a voting strategy. Bold patterns are misclassified by a single hyperplane but not by the combination.

Hence, our combination algorithm proceeds as follows. First, the dissimilarities introduced in section 2 are computed. Each dissimilarity is embedded into a Euclidean space, and the training and test pattern coordinates are obtained using equations (7) and (11) respectively. To increase the diversity of classifiers, once the dissimilarities are embedded a bootstrap sample of the patterns is drawn. Next, we train an SVM for each dissimilarity and bootstrap sample. Thus, it is expected that misclassification errors will change from one classifier to another, so the combination of classifiers by a voting strategy will help to reduce the misclassification errors.

A related technique to combine classifiers is bagging [4, 3]. This method generates a diversity of classifiers that are trained using several bootstrap samples. Next, the classifiers are aggregated using a voting strategy. Nevertheless, there are three important differences between bagging and the method proposed in this section. First, our method generates the diversity of classifiers by considering different dissimilarities and thus induces a stronger diversity among classifiers. A second advantage of our method is that it is able to work directly with a dissimilarity matrix. Finally, the combination of several dissimilarities avoids the problem of choosing a particular dissimilarity for the application we are dealing with, which is a difficult and time consuming task.
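The whole combination strategy can be summarized in the following sketch (ours, not the authors' code). The scipy metric names, the 0/1 label convention and the default parameters are illustrative assumptions, and the rank-based dissimilarities of section 2 would have to be added as custom metrics.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.svm import SVC

def dissimilarity_ensemble(X_train, y_train, X_test,
                           metrics=("euclidean", "cityblock", "cosine", "correlation"),
                           k=20, m=20, C=0.1, seed=0):
    """Ensemble of SVMs built on several dissimilarities and combined by majority vote."""
    rng = np.random.default_rng(seed)
    y_train = np.asarray(y_train)                      # labels assumed to be 0 (legitimate) / 1 (spam)
    votes = np.zeros(len(X_test))
    for metric in metrics:
        D = cdist(X_train, X_train, metric=metric)     # training dissimilarities
        Dn = cdist(X_test, X_train, metric=metric)     # test-to-training dissimilarities
        Z_train, V, lam = mds_embed_train(D, k)        # embedding, eqs. (6)-(7)
        Z_test = mds_project_test(Dn, D, V, lam)       # projection, eqs. (8)-(11)
        for _ in range(m):                             # bootstrap samples increase diversity
            idx = rng.integers(0, len(Z_train), len(Z_train))
            clf = SVC(kernel="linear", C=C).fit(Z_train[idx], y_train[idx])
            votes += clf.predict(Z_test)
    return (votes > len(metrics) * m / 2).astype(int)  # majority vote over all classifiers
```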

Notice that the algorithm proposed above can easily be applied to other distance-based classifiers such as the k-nearest neighbor algorithm.

4 Experimental results


In this section, the ensemble of classifiers proposed is applied to the identification of spam messages. The spam collection considered is available from the UCI Machine Learning database [19]. The corpus is made up of 4601 emails, of which 39.4% are spam and 60.6% legitimate messages. The number of features considered to codify the emails is 57, and they are described in [19]. The dissimilarities have been computed without normalizing the variables, because this preprocessing may increase the correlation among them. As we have mentioned in section 3, the disparity among the dissimilarities will help to improve the performance of the ensemble of classifiers. Once the dissimilarities have been embedded in a Euclidean space, the variables are normalized to unit variance and zero mean. This preprocessing improves the SVM accuracy and the speed of convergence. Regarding the ensemble of classifiers, an important issue is the dimensionality in which the dissimilarity matrix is embedded. To this aim, a metric Multidimensional Scaling algorithm is first run. The number of eigenvectors considered is determined by the curve induced by the eigenvalues. For the dataset considered, figure 2 shows that the first twenty eigenvalues preserve the main structure of the dataset. In any case, the sensitivity to this parameter is not high provided that the number of eigenvalues chosen is large enough: for this dataset, values larger than twenty give good experimental results.
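As a rough sketch of this dimensionality choice (ours; the 95% threshold is only illustrative, the paper simply keeps about the first twenty eigenvalues), the eigenvalue curve can be inspected as follows, reusing mds_embed_train from section 3:

```python
import numpy as np
from scipy.spatial.distance import cdist

# X_train is assumed to hold the 57-dimensional email feature vectors
D = cdist(X_train, X_train, metric="cosine")
_, _, lam = mds_embed_train(D, k=D.shape[0])       # full eigenvalue spectrum
explained = np.cumsum(lam) / np.sum(lam)
k = int(np.searchsorted(explained, 0.95)) + 1      # smallest k covering 95% of the spectrum
print("chosen embedding dimension:", k)
```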


Figure 2: Eigenvalues for the Multidimensional Scaling algorithm with the cosine dissimilarity.

The combination strategy proposed in this paper has also been applied to the k-nearest neighbor classifier. An important parameter in this algorithm is the number of neighbors, which has been estimated using 20% of the patterns as a validation set.
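One plausible way to carry out this estimation (a sketch of ours, not necessarily the authors' exact protocol) is to hold out 20% of the training patterns and keep the number of neighbors with the lowest validation error:

```python
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Z, y are assumed to hold the embedded coordinates and the 0/1 labels
Z_tr, Z_val, y_tr, y_val = train_test_split(Z, y, test_size=0.2, random_state=0)
errors = {k: 1.0 - KNeighborsClassifier(n_neighbors=k).fit(Z_tr, y_tr).score(Z_val, y_val)
          for k in range(1, 11)}
best_k = min(errors, key=errors.get)               # number of neighbors with lowest validation error
print("selected number of neighbors:", best_k)
```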

The classifiers have been evaluated from two different points of view. On the one hand, we have computed the misclassification errors. On the other hand, since in our application false positive errors are very expensive and should be avoided, the false positive rate is a good index of the algorithm performance and is also provided. The errors have been evaluated on a test subset of 20% of the patterns drawn randomly without replacement from the original dataset.
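The evaluation step can be sketched as follows (our own code; the random split, the label convention with 1 = spam, and the definition of the false positive rate as the fraction of legitimate messages flagged as spam are assumptions):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# X, y are assumed to hold the 4601 x 57 feature matrix and its 0/1 labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
y_pred = dissimilarity_ensemble(X_tr, y_tr, X_te)          # ensemble sketched in section 3
y_te = np.asarray(y_te)
error = np.mean(y_pred != y_te)                            # overall misclassification error
false_pos = np.mean(y_pred[y_te == 0] == 1)                # legitimate mail classified as spam
print(f"error = {error:.3f}, false positive rate = {false_pos:.3f}")
```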

Method              Linear kernel               Polynomial kernel
                    Error    False positive     Error    False positive
Euclidean           8.1%     4.0%               15%      11%
Cosine              19.1%    15.3%              30.4%    8%
Correlation         18.7%    9.8%               31%      7.8%
Manhattan           12.6%    6.3%               19.2%    7.1%
Kendall-τ           6.5%     3.1%               11.1%    5.4%
Spearman            6.6%     3.1%               11.1%    5.4%
Bagging Euclidean   7.3%     3.0%               14.3%    4%
Combination         6.1%     3%                 11.1%    1.8%

Parameters: Linear kernel: C=0.1, m=20; Polynomial kernel: Degree=2, C=5, m=20

Table 1: Experimental results for the ensemble of SVM classifiers. Classifiers based solely on a single dissimilarity and bagging have been taken as reference.

Table 1 shows the experimental results for the ensemble of classifiers using the SVM. The method proposed has been compared with bagging, introduced in section 3, and with classifiers based on a single dissimilarity. The parameter m determines the number of bootstrap samples considered by the combination strategies, and C is the standard regularization parameter of the C-SVM [21]. From the analysis of table 1, the following conclusions can be drawn:

• The combination strategy improves significantly on the Euclidean distance, which is the measure usually considered by most SVM algorithms.

• The combination strategy with polynomial kernel reduces significantly the false positive errors of the best single classifier. The improvement is smaller for the linear kernel. This can be explained because the non-linear kernel allows us to build classifiers with larger variance, and therefore the combination strategy can achieve a larger improvement of the false positive errors.

• The combination strategy proposed outperforms a widely used aggregation method such as bagging. The improvement is particularly important for the polynomial kernel.

Table 2 shows the experimental results for the ensemble of k-NN classifiers, where k denotes the number of nearest neighbors considered. As in the previous case, the combination strategy proposed improves particularly the false positive errors of classifiers based on a single distance.

Method        Error    False positive
Euclidean     22.5%    9.3%
Cosine        23.3%    14.0%
Correlation   23.2%    14.0%
Manhattan     23.2%    12.2%
Kendall-τ     21.7%    6%
Spearman      11.2%    6.5%
Bagging       19.1%    11.6%
Combination   11.5%    5.5%

Parameters: k = 2

Table 2: Experimental results for the ensemble of k-NN classifiers. Classifiers based solely on a single dissimilarity and bagging have been taken as reference.

We also report that bagging is not able to reduce the false positive errors of the Euclidean distance. Besides, our combination strategy improves significantly on the bagging algorithm. Finally, we observe that the misclassification errors are larger for k-NN than for the SVM. This can be explained because the SVM has a higher generalization ability when the number of features is large.

5 Conclusions and future research trends

In this paper, we have proposed an ensemble of classifiers based on a diversity of dissimilarities. Our approach aims to reduce, in particular, the false positive errors of classifiers based solely on a single distance. Besides, the algorithm is able to work directly from a dissimilarity matrix. The algorithm has been applied to the identification of spam messages. The experimental results suggest that the method proposed helps to improve both the misclassification errors and the false positive errors. We also report that our algorithm outperforms classifiers based on a single dissimilarity and other combination strategies such as bagging. As future research, we will try to apply other combination strategies that assign different weights to each classifier.

Acknowledgments. This work was partially supported by the Junta de Castilla y León grant PON05B06.

References

[1] C. C. Aggarwal, Re-designing distance functions and distance-based applications for high dimensional applications, in Proc. of the ACM International Conference on Management of Data and Symposium on Principles of Database Systems (SIGMOD-PODS), vol. 1, March 2001, pp. 13–18.

[2] I. Androutsopoulos, J. Koutsias, K. V. Chandrinos and C. D. Spyropoulos, An experimental comparison of naive Bayesian and keyword-based anti-spam filtering with personal e-mail messages, in 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 160–167, Athens, Greece, 2000.

[3] E. Bauer and R. Kohavi, An empirical comparison of voting classification algorithms: Bagging, boosting, and variants, Machine Learning, vol. 36, pp. 105–139, 1999.

[4] L. Breiman, Bagging predictors, Machine Learning, vol. 24, pp. 123–140, 1996.

[5] X. Carreras and I. Márquez, Boosting trees for anti-spam email filtering, in RANLP-01, Fourth International Conference on Recent Advances in Natural Language Processing, pp. 58–64, Tzigov Chark, BG, 2001.

[6] T. Cox and M. Cox, Multidimensional Scaling, 2nd ed., New York: Chapman & Hall/CRC Press, 2001.

[7] P. Domingos, MetaCost: A general method for making classifiers cost-sensitive, in ACM Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD), pp. 155–165, San Diego, USA, 1999.

[8] H. Drucker, D. Wu and V. N. Vapnik, Support vector machines for spam categorization, IEEE Transactions on Neural Networks, 10(5), pp. 1048–1054, September 1999.

[9] T. Fawcett, "In vivo" spam filtering: A challenge problem for KDD, ACM Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD), 5(2), pp. 140–148, December 2003.

[10] G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed., Baltimore, Maryland, USA: Johns Hopkins University Press, 1996.

[11] S. Hershkop and J. S. Salvatore, Combining email models for false positive reduction, in ACM Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD), pp. 98–107, Chicago, Illinois, August 2005.

[12] A. Hinneburg, C. C. Aggarwal and D. A. Keim, What is the nearest neighbor in high dimensional spaces?, in Proc. of the International Conference on Database Theory (ICDT), Cairo, Egypt: Morgan Kaufmann, September 2000, pp. 506–515.

[13] J. Kittler, M. Hatef, R. Duin and J. Matas, On combining classifiers, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 228–239, March 1998.

[14] A. Kolcz and J. Alspector, SVM-based filtering of e-mail spam with content-specific misclassification costs, in Workshop on Text Mining (TextDM'2001), pp. 1–14, San Jose, California, 2001.

[15] M. Martín-Merino and A. Muñoz, A new Sammon algorithm for sparse data visualization, in International Conference on Pattern Recognition (ICPR), vol. 1, Cambridge (UK): IEEE Press, August 2004, pp. 477–481.

[16] M. Martín-Merino and A. Muñoz, Self organizing map and Sammon mapping for asymmetric proximities, Neurocomputing, vol. 63, pp. 171–192, 2005.

[17] E. Pekalska, P. Paclik and R. Duin, A generalized kernel approach to dissimilarity-based classification, Journal of Machine Learning Research, vol. 2, pp. 175–211, 2001.

[18] F. Provost and T. Fawcett, Robust classification for imprecise environments, Machine Learning, vol. 42, pp. 203–231, 2001.

[19] UCI Machine Learning Repository. Available from: www.ics.uci.edu/~mlearn/MLRepository.html

[20] G. Valentini and T. Dietterich, Bias-variance analysis of support vector machines for the development of SVM-based ensemble methods, Journal of Machine Learning Research, vol. 5, pp. 725–775, 2004.

[21] V. Vapnik, Statistical Learning Theory, New York: John Wiley & Sons, 1998.