
Web Intelligence and Agent Systems: An International Journal 6 (2008) 253–272 DOI 10.3233/WIA-2008-0140 IOS Press

Computational Intelligence techniques for Web personalization

G. Castellano, A.M. Fanelli and M.A. Torsello*
Computer Science Department, Via Orabona 4, 70126 Bari, Italy

* Corresponding author. E-mail: [email protected].

Abstract. Computational Intelligence (CI) paradigms prove to be potential tools for facing the uncertainty underlying the Web. In particular, CI techniques may be properly exploited to handle Web usage data and develop Web-based applications tailored to user preferences. The main rationale behind this success is the synergy resulting from CI components, such as fuzzy logic, neural networks and genetic algorithms. In fact, rather than being competitive, each of these computing paradigms provides complementary reasoning and searching methods that allow the use of domain knowledge and empirical data to solve complex problems. This paper focuses on the major Computational Intelligence combinations applied in the context of Web personalization, by providing different examples of intelligent systems which have been designed to provide Web users with the information they search for, without expecting them to ask for it explicitly. In particular, this paper emphasizes the suitability of hybrid schemes deriving from the profitable combination of different CI methodologies for the development of effective Web personalization systems.

Keywords: Computational Intelligence, neuro-fuzzy model, Web personalization, Web recommendation, Web usage mining

1. Introduction

In recent years, the huge quantity of information available on the Web and the ever increasing number of users who connect to the network daily have made the need to personalize the Web information space more urgent. When browsing the Web, users often feel disoriented, facing the continuously expanding problem of information overload. Web personalization is one of the most important remedies to this problem, supporting users during their navigational activity and offering them personalized services. According to the definitions found in the literature, Web personalization may be considered as the set of all the actions able to tailor the information or services provided by a Web site to the needs of a particular user or a set of users, taking advantage of the knowledge gained from the users' navigational behavior and individual interests, in combination with the content and the structure of the Web site. A key feature of Web personalization is that the process of adaptation
is executed in an automatic manner, without requiring the intervention of users [68,70]. Web personalization plays an important role in a growing number of applications, such as e-commerce, e-business, adaptive Web systems, information retrieval, and so on. Depending on the particular context, the personalization functions offered may differ, ranging from customization to the recommendation of interesting items. For example, e-commerce represents one of the most popular Web personalization applications. In this context, personalization offers the function of suggesting products or advertisements to online customers. This function is generally realized through recommendation systems [14,85,90]. In e-business, Web personalization additionally provides mechanisms to learn more about customer needs, identify future trends and eventually increase customer loyalty to the provided service [1]. In adaptive Web sites, personalization is intended to improve the organization and presentation of the Web site by tailoring information and services so as to match the unique and specific needs of users [9,27]. In practice, adaptive sites can make popular pages more accessible, highlight interesting links, connect related pages,
and cluster similar documents together [81]. Finally, in information retrieval, personalization is regarded as a way to reflect user preferences in the search process, so that users can find results more appropriate to their queries [45]. In the development of a Web personalization system, two main challenging problems have to be addressed: how to discover useful knowledge from the Web under uncertainty and how to exploit this knowledge in order to make intelligent decisions for Web users. Computational Intelligence (CI) provides valid tools to cope with such problems. CI embraces a variety of computing paradigms that work synergistically to exploit the tolerance for imprecision, uncertainty, approximate reasoning, and partial truth in order to provide flexible information processing capabilities and obtain low-cost solutions with a close resemblance to human-like decision making. In recent years, many research efforts have been devoted to investigating the applicability of CI techniques (neural networks, fuzzy systems, genetic algorithms and combinations of these) in the wide domain of the Web, giving rise to a new flourishing research direction known as Computational Web Intelligence (CWI) [112]. In particular, researchers have turned their attention towards the application of CI techniques to Web personalization tasks, especially in processes based on the Web usage mining methodology. Many review works confirm this interest [2,41,78,88]. This paper focuses on the application of CI techniques in the wide domain of Web personalization. In particular, the employment of intelligent techniques for user profiling and Web recommendation is overviewed, with special focus on the use of hybrid techniques such as neuro-fuzzy systems. The paper is organized as follows. In Section 2 we deal in depth with the topic of Web personalization, focusing on the employment of Web usage mining techniques for the development of Web applications endowed with personalization functions. Section 3 presents the model of a general Web usage based personalization system, outlining the different steps involved in its scheme. Section 4 motivates the use of CI techniques for the development of Web personalization systems. In particular, we overview existing systems for Web personalization based on different CI methods, such as fuzzy models, neural networks, evolutionary approaches and hybrid strategies. As an example of a hybrid approach used for Web personalization, in Section 5 we describe a neuro-fuzzy Web personalization system for the dynamic suggestion of interesting links to a user visiting the site. In Section 6 some conclusive remarks are drawn.

2. Web personalization

Broadly speaking, Web personalization is intended as the process of adapting the content and/or the structure of a Web site in order to provide users with the information they are interested in [21,68,70]. Several definitions have been given for Web personalization. In Mobasher et al. [65], Web personalization is simply defined as the task of making Web-based information systems adaptive to the needs and interests of individual users. Typically, a system endowed with personalization functionalities is able to recognize its users, acquire knowledge about their preferences and adapt its services in order to meet the users' interests. To accomplish these tasks, many different approaches have been proposed in the literature on this topic [49,58,65]. In most traditional personalization systems, the personalization process requires substantial manual work and, most of the time, significant effort from the user. One way to expand the personalization of the Web consists in making the adaptation of Web-based services to the needs of their users automatic. Machine learning methods have been widely employed in similar tasks, such as automating the construction and adaptation of information systems [54,84,105]. Furthermore, the integration of machine learning techniques in larger process models, such as that of Knowledge Discovery in Data (KDD or Data Mining), can offer a complete solution to the adaptation task. Data Mining has been used to analyze Web data and extract useful knowledge from them, leading to the new research area named Web mining [21,23,48,78]. Based on several research studies, three important sub-areas can be distinguished within Web mining, namely Web content mining, Web structure mining and Web usage mining:

– Web content mining, as the name suggests, finds useful information in the content of Web pages, for example textual data included in a Web page such as words or tags, pictures, downloadable files, and so on.
– Web structure mining tries to discover knowledge from the structure of the hyperlinks at the inter-document level. It aims at generating a structural summary of the Web site and its pages, considering as the main data source the structural information present in Web pages, such as links to other pages.
– Web usage mining is the branch of Web mining that deals with the extraction of knowledge from usage data generated by the visits of users to a Web site, especially those contained in Web log files.

The last decade has seen the flourishing of research in the area of Web mining and especially of Web usage mining. A large number of papers provide overviews of the advances of research in this field [1,3,14,15,25,82]. Research in the field of Web usage mining represents a source of ideas and solutions for the development of Web personalization systems. In fact, usage data, such as the Web log files stored by the server whenever users access a given Web site, represent the interactions between the users and that particular Web site. Web usage mining provides an approach to the collection and preprocessing of this kind of data, and creates patterns representing the behaviour and the interests of users. Hence, these patterns can be automatically exploited by a personalization system, without requiring the intervention of a human expert, to realize the personalization functions. The models obtained through the Web usage mining process represent the operational knowledge for Web personalization.
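As a concrete illustration of the raw material that Web usage mining works on, the sketch below parses entries of a server access log. It assumes the widely used Combined Log Format; the field layout of a real log may differ, so the regular expression is only an illustrative assumption to be adapted.

import re
from datetime import datetime

# Combined Log Format (assumed):
# host ident user [time] "request" status bytes "referer" "user-agent"
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) \S+" (?P<status>\d{3}) \S+ '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

def parse_log_line(line):
    """Return a dict with the fields of one access-log entry, or None."""
    match = LOG_PATTERN.match(line)
    if match is None:
        return None
    entry = match.groupdict()
    entry["time"] = datetime.strptime(entry["time"], "%d/%b/%Y:%H:%M:%S %z")
    entry["status"] = int(entry["status"])
    return entry

sample = ('192.168.0.1 - - [10/Oct/2008:13:55:36 +0200] "GET /index.html HTTP/1.1" '
          '200 2326 "-" "Mozilla/5.0"')
print(parse_log_line(sample)["url"])  # -> /index.html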

3. General scheme of a Web personalization system

In general, regardless of the application context, three main steps are always performed in a usage-based Web personalization process [66]:

– Preprocessing: Web data are collected and preprocessed to obtain data in a form suitable for analysis in the next step;
– Pattern discovery: the collected data are analyzed in order to extract correlations between data and discover usage patterns;
– Personalization: the extracted knowledge is employed to implement the effective personalization functions.

More precisely, in the overall process of usage-based Web personalization two principal and related modules can be identified: an offline and an online module. In the offline module, collected Web usage data are preprocessed and the specific usage mining tasks are performed in order to derive the knowledge useful for the implementation of personalization functions. Hence, the offline module is generally in charge of the preprocessing and pattern discovery activities. The online module mainly comprises a personalization engine which exploits the knowledge derived by the offline activities in order to provide users with interesting information according to their needs and interests. Figure 1 depicts the general scheme of a Web personalization system. In the following sections, each step of the personalization process is examined in more detail. In particular, we provide an overview of works that proposed different solutions for the implementation of each step.

3.1. Web data preprocessing

The initial step of a Web personalization process is the preprocessing of Web log data. Since log files contain a large amount of data, a preliminary data preprocessing (filtering irrelevant data, predicting missing values, removing noise, and so on) is necessary to obtain a collection of data expressed in a consistent manner. Once preprocessed, log data may be used as input to the pattern discovery process. In the preprocessing step, two primary activities are executed: data filtering and user session identification. An extensive description of data preparation and preprocessing methods can be found in [16]. In the sequel, we focus on the activities of data filtering and user session identification. Since Web log files collect all the interactions between a Web site and its users, they may also comprise useless information and a large amount of noise that needs to be filtered out. Useless records in log files are mainly due to the HTTP protocol, which issues a separate access request for every file, image or multimedia object embedded in the Web page requested by the user. In this way, a single user request for a Web page may often result in several log entries that correspond to files automatically downloaded without an explicit request by the user.

Fig. 1. The general scheme of a Web personalization system.

Records corresponding to these requests do not represent the effective browsing activity of the connected user, hence they have to be removed. Elimination of these items can be accomplished by checking the suffix of the URL name, depending on the type of site being analyzed. Records corresponding to failed user requests are also filtered. Finally, data filtering should remove records generated by Web robots,1 which are not considered representative of the user browsing behavior. Web robot sessions are detected in different ways: by examining sessions that access a specially formatted file called robots.txt, by exploiting the User Agent field of log files, wherein most crawlers identify themselves, or by matching the IP address of sessions with those of known robot clients.

1 Web robots, also known as Web crawlers or Web spiders, are programs which traverse the Web in a methodical and automated manner, downloading complete Web sites in order to update the index of a search engine.
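A minimal sketch of these filtering heuristics is given below, assuming log entries parsed as in the earlier sketch. The file suffixes, status codes and robot signatures are illustrative assumptions and would be tuned to the site being analyzed.

IGNORED_SUFFIXES = (".gif", ".jpg", ".png", ".css", ".js", ".ico")  # assumed set
ROBOT_HINTS = ("bot", "crawler", "spider")                          # assumed signatures

def keep_record(entry):
    """Return True if a parsed log entry looks like genuine user browsing."""
    url = entry["url"].lower()
    if url.endswith(IGNORED_SUFFIXES) or url.endswith("/robots.txt"):
        return False                       # embedded resources and robot probes
    if entry["status"] >= 400:
        return False                       # failed requests
    agent = entry["agent"].lower()
    if any(hint in agent for hint in ROBOT_HINTS):
        return False                       # self-identified crawlers
    return True

# entries = [parse_log_line(l) for l in open("access.log")]
# cleaned = [e for e in entries if e is not None and keep_record(e)]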

After data filtering, the cleaned log data are analyzed in order to identify user sessions embedding the user browsing behavior on a specific Web site. A user session can be defined as a limited set of URLs corresponding to the pages visited by a user from the moment the user enters a Web site to the moment the same user leaves it [100]. Hence the problem of user session identification is strictly related to the problem of identifying a single user. On this subject, various methods have been proposed to automatically recognize a user. Among these, the simplest and also the most commonly employed approach consists of assigning a user to each different IP address present in the log files [73,100]. This method is not very accurate because, for example, a visitor may access the Web from different computers, or many users may use the same IP address (if a proxy is used). Other Web usage mining tools use more accurate approaches for the a priori identification of unique visitors, such as cookies, which store on the client machine information about the user generated by the server whenever the user visits the Web site [44].
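The sketch below combines the simple IP-based user identification just described with the timeout heuristic discussed in the next paragraph: consecutive accesses from the same address stay in the same session as long as they are no more than a chosen timeout apart (30 minutes here, an assumption).

from collections import defaultdict
from datetime import timedelta

SESSION_TIMEOUT = timedelta(minutes=30)  # assumed maximum gap between accesses

def build_sessions(entries):
    """Group cleaned log entries into per-user sessions (lists of entries)."""
    by_user = defaultdict(list)
    for entry in sorted(entries, key=lambda e: e["time"]):
        by_user[entry["host"]].append(entry)   # user ~ IP address

    sessions = []
    for accesses in by_user.values():
        current = [accesses[0]]
        for previous, entry in zip(accesses, accesses[1:]):
            if entry["time"] - previous["time"] > SESSION_TIMEOUT:
                sessions.append(current)
                current = []
            current.append(entry)
        sessions.append(current)
    return sessions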


Assuming a user has been identified, the click stream of each user is divided into sessions. To do this, different approaches have been proposed, which can be grouped into two categories: time-based and context-based methods [98]. In time-based methods, the usual solution is to set a minimum timeout and assume that consecutive accesses within it belong to the same session, or to set a maximum timeout, where two consecutive accesses that exceed it belong to different sessions. Context-based methods consider the access to specific kinds of pages or define conceptual units of work to identify the user sessions. Here, transactions are recognized, where a transaction represents a subset of pages that occur in a user session. Based on the assumption that transactions depend on contextual information, Web pages are classified into auxiliary, content and hybrid pages. Auxiliary pages contain links to other pages of the site; content pages contain the information interesting for the user; finally, hybrid pages combine characteristics of both previous kinds of pages. Starting from this classification, Cooley et al. [15] have distinguished content-only transactions from auxiliary-content transactions. The former include all the content pages visited by the user, whereas the latter refer to the paths followed to retrieve a content page. Several methods have been developed to identify transactions, but none of them is without problems [79,80,82,94].

3.2. Pattern discovery

Once Web data have been preprocessed, the next step consists in discovering patterns of usage of the Web site through the application of Web usage mining techniques. To achieve this aim, methods and algorithms belonging to several fields such as statistics, data mining, machine learning and pattern recognition are applied to discover useful knowledge for the ultimate personalization process. Most commercial applications commonly derive knowledge about users by executing statistical analysis on session data. Many Web traffic mining tools produce periodic reports including important statistical information descriptive of user browsing patterns, such as the most frequently accessed pages, average view time, average length of navigational paths, and so on. This kind of extracted knowledge may be useful to improve the system performance, to facilitate the site modification task, and so on. In the context of knowledge discovery techniques specifically designed for the analysis of Web usage
data, research effort has mainly focused on three distinct paradigms: association rules, sequential patterns and clustering. Han and Kamber [34] give an exhaustive review of these techniques. The most straightforward technique employed in Web usage mining is represented by association rules, which express associations among Web pages that frequently appear together in user sessions. This kind of approach has been used in Joshi et al. [42] and Nanopoulos et al. [69], while some measures to evaluate association rules mined from Web usage data have been proposed by Huang et al. [40]. Fuzzy association rules, obtained by the combination of association rules and fuzzy logic, have been extracted in Wong and Pal [106]. Sequential pattern discovery turns out to be particularly useful for the identification of navigational patterns in Web usage data. In this kind of approach, the element of time is introduced in the process of discovering patterns which frequently appear in user sessions. To extract sequential patterns, two main classes of algorithms are employed: methods based on association rule mining and methods based on the use of tree structures and Markov chains. Some well-known algorithms for mining association rules have been modified to obtain sequential patterns. For example, the Apriori algorithm has been properly extended to derive two new algorithms, AprioriAll and GSP, proposed in Huang et al. [40] and Mortazavi-Asl [67]. An alternative algorithm based on the use of a tree structure has been presented in Pei et al. [80]. Tree structures have also been used in Menasalvas et al. [61]. Clustering is the most widely employed technique in the pattern discovery process. Clustering techniques look for groups of similar items among large amounts of data, based on a distance function which computes the similarity between items. Vakali et al. [104] provided an exhaustive overview of Web data clustering methods used in different research works. Following the classification suggested by Vakali, in the Web usage domain two kinds of interesting clusters can be discovered: usage clusters and Web document clusters. Xie and Phoha [107] were the first to suggest that the focus of Web usage mining should be shifted from single user sessions to groups of user sessions. Subsequently, in a large number of works, usage clustering techniques have been used in the Web usage mining process to group together similar sessions [5,36,40]. Clustering of Web documents aims to discover groups of pages having related content. In general, a Web document can be considered as a collection
of Web pages (a set of related Web resources, such as HTML files, XML files, images, applets, multimedia resources and so on). In this framework, the Web topology can be regarded as a directed graph, where the nodes represent the Web pages with their URL addresses and the edges among nodes represent the hyperlinks among Web pages. In this context, the concepts of compound documents [22] and logical information units [101] have been introduced. A compound document is a set of Web pages having the fundamental property that their link graph contains a vertex from which a path leads to every other part of the document. Moreover, a Web community is defined as a set of Web pages that link to more Web pages in the community than to pages outside of the community [32]. The main benefits deriving from clustering include increasing Web information accessibility, understanding users' navigation behaviour, identifying user profiles, and improving information retrieval in search engines and content delivery on the Web.

3.3. Personalization

The knowledge extracted through the process of knowledge discovery has to be exploited in the personalization process. Personalization functions can be accomplished in a manual or in an automatic and transparent manner for the user. In the first case, the discovered knowledge has to be expressed in a manner comprehensible to humans, so that it can be analyzed to support human experts in making decisions. To accomplish this task, different approaches have been introduced in order to provide useful information for personalization. An effective method for presenting comprehensive information to humans is the use of visualization tools such as WebViz [83], which represents navigational patterns as graphs. Reports are also a good method to synthesize and visualize useful statistical information previously generated. Personalization systems such as WUM [97] and WebMiner [17] use SQL-like query mechanisms for the extraction of rules from navigation patterns. Nevertheless, decisions made by the user may create delay and loss of information. As a consequence, a more interesting approach consists in the integration of Web usage mining in the personalization process. In particular, the knowledge extracted from Web data is automatically exploited in a personalization process which adapts the Web-based system according to the discovered patterns. Knowledge is then delivered to the users by means of one or more of the personalization functions. Following the scheme of a general Web usage-based personalization system, this phase is included in the online module aimed at realizing the personalization functionality. All the other steps involved in the Web personalization system, i.e. Web data preprocessing and pattern discovery, are periodically performed in the offline module.
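To make the offline/online split concrete, the skeleton below sketches how the two modules could be organized; class names are hypothetical and the toy popularity count merely stands in for the actual pattern discovery techniques discussed in this section.

from collections import Counter

class OfflineModule:
    """Periodically rebuilds usage knowledge from preprocessed sessions."""

    def discover_patterns(self, sessions):
        # Toy pattern discovery: overall page popularity across sessions.
        popularity = Counter()
        for session in sessions:
            popularity.update(set(session))
        return popularity

class OnlineModule:
    """Uses the discovered knowledge to personalize responses at request time."""

    def __init__(self, knowledge):
        self.knowledge = knowledge

    def recommend(self, active_session, top_n=3):
        # Suggest the most popular pages the user has not visited yet.
        unseen = [(page, count) for page, count in self.knowledge.items()
                  if page not in active_session]
        unseen.sort(key=lambda item: item[1], reverse=True)
        return [page for page, _ in unseen[:top_n]]

offline = OfflineModule()
knowledge = offline.discover_patterns([["/home", "/news"], ["/home", "/contact"]])
online = OnlineModule(knowledge)
print(online.recommend(["/home"]))  # -> ['/news', '/contact']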

4. Computational Intelligence for Web personalization

The term CI indicates a consortium of methodologies that work synergistically to find approximate solutions for real-world problems which contain various kinds of inaccuracies and uncertainties. The guiding principle is to devise methods of computation that lead to an acceptable solution at low cost by seeking an approximate solution to an imprecisely/precisely formulated problem. The computational paradigms underlying CI are Neural Computing (NC), Fuzzy Logic Computing (FL) and Evolutionary Computing (EC), where NC supplies the machinery for learning and modeling complex functions, FL gives mechanisms for dealing with the imprecision and uncertainty underlying real-life problems, and EC provides algorithms for optimization and searching. Systems based on such paradigms are artificial Neural Networks (NN), Fuzzy Systems (FS), and Evolutionary Algorithms (EA). Rather than a collection of different paradigms, CI is better regarded as a partnership in which each of the partners provides a methodology for addressing problems in a different manner. From this perspective, CI methodologies are complementary rather than competitive. This relationship enables the creation of hybrid computing schemes which use NN, FS and EA in combination. Figure 2 shows different possibilities of integration between CI paradigms. Among these, the neuro-fuzzy combination is today the most visible one [57,76]. In the last few years, the relevance of CI methodologies to Web personalization tasks has drawn the attention of researchers, as indicated in a recent review [27]. Indeed, CI can improve the behavior of Web-based applications, as both imprecision and uncertainty are inherently present in Web activity. Web data, being unlabeled, imprecise, incomplete and heterogeneous, appear to be a good candidate to be mined in the CI framework. Besides, CI seems to be the most appropriate paradigm in Web mining where, since human interaction is a key component, issues such

Fig. 2. Possible integrations between Computational Intelligence paradigms.

as approximate queries, deduction, personalization and learning have to be faced. From this perspective, the CI community has addressed the attention toward the growth of a novel research area known as Computational Web Intelligence (CWI) [112]. The rationale behind CWI is that CI methodologies, being complementary rather than competitive, can be successfully employed in combination to develop intelligent Web personalization systems. In this context, NN with self organization abilities are typically used for pattern discovery and rule generation. FL is used for handling issues related to incomplete/imprecise Web data mining, understandability of patterns and explicit representation of Web recommendation rules. EA are mainly used for efficient search and retrieval of documents and information on the Web. In the following subsections, we give an overview of existing CI techniques and mention examples of Web personalization tasks implemented in the CI framework. 4.1. Neural computing in Web personalization Neural Networks are computational models, that are loosely modeled on biological systems and exhibit some of the properties of the biological neurons. They are composed by a number of simple processors (neurons) working in parallel, without any centralized control. The neurons are arranged in a particular structure which is usually organized in layers [35]. NN are commonly regarded as learning machines that work on the basis of empirical data. The only means of acquiring

knowledge about the world in a connectionist system comes from observational instances. There are no a priori conceptual patterns that could lead to a learning process. These characteristics make NN powerful tools to extract user models from Web usage data. As an example, in [95], neural network agents are used for learning user profiles with training data collected from users. Bidel, Lemoine and Piat [7] use a neural network to classify user navigation paths. In Seo and Zhang [93], a neural approach based on reinforcement learning is presented that learns user preferences implicitly from direct observations of browsing behaviors during interactions. Competitive learning schemes can also be used to mine access patterns from Web log data and discover association rules between URL pages, as in Menon and Dagli [62] and Dong et al. [19]. Once the user profiles have been learned, a NN can be used to determine whether the users would be interested in another page and to quickly make suggestions about pages to visit. As an example, Nasraoui and Pavuluri [74] propose a recommendation system that suggests relevant URLs by relying on a committee of profile-specific neural networks. To perform recommendation, unsupervised NN like Self-Organizing Maps (SOM) have been extensively used. Roh, Oh and Han [86] used a SOM to create a recommendation system for movies. Similarly, Changchien and Lu [13] used SOM to create a recommendation system for e-commerce. In Golovin and Rahm [29], a rule-based recommendation system is proposed where reinforcement learning is applied to

continuously evaluate the user acceptance of presented recommendations and to adapt the recommendations to reflect the user interests. Neural networks have also been applied to perform a particular form of Web personalization, known as Personalized Page Ranking. Since human beings find it difficult to scan through the entire list of documents returned by a search engine in response to their queries, it is desirable to have the pages ranked with respect to their "relevance" to user queries, so that one can get the desired documents by scanning only the first few pages. In the computation of page ranks, several factors should be taken into account, such as the popularity of a page (reputation of incoming links), the richness of its information content (number of outgoing links), and the user preference (whether the link matches the preferences of the user, established from his/her history). Since NN can model nonlinear functions and learn from examples, they appear to be a good candidate for computing the "relevance" of a page, as demonstrated in Scarselli et al. [89], where a neural network model, capable of processing general types of graph-structured data, is applied to compute customized page ranks on the Web. Despite the numerous successful applications, the Neural Computing paradigm still faces some important limitations in developing Web personalization systems. The main one is related to the difficulty of learning models dynamically. Actually, when more information becomes available (e.g. a new Web document is added or a new user accesses the Web site), the NN is usually retrained from scratch. More research in the field of incremental NN learning is needed. Another drawback of NN is that the knowledge extracted by learning is not easily interpretable. As a consequence, when human-understandable user models are needed, the application of NN is avoided. To overcome such a limitation, the NN paradigm is usually integrated with the FL paradigm, which offers an explicit knowledge representation in terms of linguistic rules. The most common form of NN-FL integration is represented by neuro-fuzzy systems, which can be fruitfully adopted to learn user profiles from Web data and discover association rules in a comprehensible form, as described in Section 4.4.
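As a minimal, self-contained illustration of the kind of neural model discussed above, the sketch below trains a tiny one-hidden-layer network (plain NumPy, batch gradient descent) to map a partial session vector to per-page interest scores. The toy data, architecture and hyperparameters are assumptions, not the setup of any of the cited systems.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: rows are session vectors (interest in 4 pages), targets are
# the "next page" preference profiles we want the network to reproduce.
X = np.array([[1., 0., 0., 0.],
              [1., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 1., 1.]])
Y = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [1., 0., 0., 0.]])

n_in, n_hidden, n_out = X.shape[1], 6, Y.shape[1]
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                      # batch gradient descent on squared error
    H = sigmoid(X @ W1)                    # hidden activations
    P = sigmoid(H @ W2)                    # predicted page scores
    delta_out = (P - Y) * P * (1 - P)
    grad_W2 = H.T @ delta_out
    grad_W1 = X.T @ ((delta_out @ W2.T) * H * (1 - H))
    W2 -= 0.5 * grad_W2
    W1 -= 0.5 * grad_W1

scores = sigmoid(sigmoid(np.array([[1., 0., 0., 0.]]) @ W1) @ W2)
print(np.round(scores, 2))                 # highest score should be for the second page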

4.2. Fuzzy logic computing in Web personalization

Fuzzy Logic defines a framework in which the inherent ambiguity of real-world data can be captured, modeled and used to reason under uncertainty. The FL theory is based on the key concept of fuzzy set, in which each element has a degree of membership to the set. This degree can take continuous values in the interval [0,1]. This characteristic allows capturing the uncertainty inherent in real data. An introduction to FL can be found in Klir and Yuan [47] and Yan et al. [109]. The application of FL to Web personalization made so far mainly falls under the tasks of user profiling and recommendation [110,111]. Fuzzy Logic is often used in combination with clustering algorithms in order to produce user profiles that capture the uncertainty of the user behavior. Indeed, Web log data are inherently noisy and fuzzy, because the browsing behavior on the Web is highly uncertain. Hence, in order to generate realistic Web usage profiles, it is better to apply fuzzy rather than crisp clustering techniques. Unlike crisp clustering, where each data point belongs to only one cluster, fuzzy clustering finds overlapping clusters, so that each data point can belong to different clusters with various membership degrees. Fuzzy clustering has been widely used for mining Web access logs in order to derive usage clusters. In Joshi and Krishnapuram [43], a fuzzy clustering technique for Web log data mining is described. Here, an algorithm called competitive agglomeration of relational data (CARD) for clustering user sessions is described, which considers the structure of the site and the URLs for computing the similarity between two user sessions. This approach requires the definition and computation of the dissimilarity or similarity between all session pairs, forming a similarity or fuzzy relation matrix, prior to clustering. Since the data in a Web session involve the access method (GET/POST), URL, transmission protocol (HTTP/FTP), and so on, which are all nonnumeric, the correlation between two user sessions and, hence, their clustering, is best handled using a fuzzy set approach. A similar approach is presented in [71], where an unsupervised relational clustering algorithm based on CARD is applied for extracting Web user profiles. Many other fuzzy relational clustering algorithms have been used for mining Web usage profiles. Among these, we mention the Fuzzy c-Trimmed Medoids algorithm [50], the Relational Fuzzy C-Maximal Density Estimator (RFCMDE), the Fuzzy c-Medoids (FCMdd) algorithm [72], and the Relational Fuzzy Subtractive clustering algorithm [100]. All these fuzzy relational clustering algorithms are variants of the original Fuzzy C-Means algorithm introduced by Bezdek [6], which performs clustering on relational data representing the similarity or
dissimilarity values between each pair of sessions. In this context, FL can also be conveniently used to define similarity measures that evaluate the distance between two URLs. This may be useful to understand which URLs are always requested together or which users have common interests and request similar documents. Hence, session similarity can in turn be expressed in terms of URL similarity. Fuzzy clustering algorithms have also been employed to perform clustering of Web documents in order to discover groups of pages having related content [24]. This requires the ability to handle overlapping data, in addition to snippet tolerance, speed and incremental characteristics. In Krishnapuram et al. [50], the Fuzzy c-Medoids (FCMdd) and Fuzzy c-Trimmed Medoids (FCTMdd) algorithms are used for clustering Web documents and snippets. In other Web applications, a FL inference mechanism is used to implement a personalization engine with the ability to mix user profiles that are similar to a certain degree. An example of a fuzzy inference system used for recommendation is proposed in Nasraoui and Petenes [73], where user profiles are obtained through hierarchical unsupervised clustering. In Ardissono and Goy [4], fuzzy logic is used both to derive profiles that model user behavior and to provide recommendations using these fuzzy profiles. Although, strictly speaking, there is no actual fuzzy inference involved, the user prototypes are modeled using membership functions, and the recommendation process is carried out using a fuzzy AND operator. Schmitt, Dengler, and Bauer [91] present a system designed to recommend products in an e-commerce site according to how well each product satisfies user preferences. The scoring of an item (according to how much that item matches user interests) is done using an OWA (Ordered Weighted Averaging) operator. This family of operators allows the representation of fuzzy logic connectives and the aggregation of different user preferences. Fuzzy Logic is a suitable computational paradigm to accomplish all personalization tasks that involve some inference mechanism over recommendation rules. Nevertheless, FS possess no mechanism for learning from data, hence they require combination with other paradigms, like NN, that enable automatic knowledge extraction from Web data. Neuro-fuzzy systems, which will be discussed in Section 4.4, have emerged as an approach to exploit the advantages of both FS and NN in a single system.
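For concreteness, the snippet below sketches an OWA aggregation of the kind mentioned above: per-criterion satisfaction degrees are sorted and combined with a fixed weight vector. The weights and scores are illustrative assumptions.

def owa(scores, weights):
    """Ordered Weighted Averaging: weights apply to ranked scores, not criteria."""
    ranked = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ranked))

# Satisfaction degrees of one product w.r.t. three user preferences (assumed).
scores = [0.9, 0.4, 0.7]

print(owa(scores, [1.0, 0.0, 0.0]))   # 0.9 -> behaves like a fuzzy OR (max)
print(owa(scores, [0.0, 0.0, 1.0]))   # 0.4 -> behaves like a fuzzy AND (min)
print(owa(scores, [1/3, 1/3, 1/3]))   # ~0.67 -> plain average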


4.3. Evolutionary computing in Web personalization

Evolutionary Computing is a general term indicating a number of computational strategies that are based on the principles of evolution and natural selection, employed in optimization problems to find near-optimal solutions [20]. An Evolutionary Algorithm is an iterative probabilistic search algorithm that begins with a set of candidate solutions, called the population. Given a quality function to be maximized, at each iteration some of the better candidates are chosen to seed the next population by applying recombination and/or mutation to them. Recombination of two selected candidates produces one or more new candidates, while mutation of a candidate results in a new candidate. The resulting new candidates compete, based on their fitness, with the old ones for a place in the new population. This process is iterated until a candidate with sufficient quality (a solution) is found. There are many variants of Evolutionary Algorithms. The most common ones are Genetic Algorithms (GA) [28] and Evolutionary Strategies (ES) [92]. These techniques are based on the same evolutionary process described above and differ only in technical details, such as the representation of the candidate solutions, which are strings over a finite alphabet in GA and real-valued vectors in ES. A natural employment of EA in Web personalization concerns the search and retrieval of relevant items for the user. A GA-based search to find other relevant homepages, given some user-supplied homepages, has been implemented in G-Search [18]. Web document retrieval by genetic learning of importance factors of HTML tags has been described in [46]. In [77] a GA-based agent collects and evaluates new HTML pages from the Web, using information included in examples provided by the user; pages that score high are served to the user by the GA agent. Gordon et al. [30] use genetic programming to automatically evolve new retrieval algorithms based on a user's evaluation of previously viewed documents. In addition, GA and EA may be used for the prediction of user preferences. Commonly, they are used for Web recommendation to derive rules that capture user goals and preferences. Examples of this approach are in [87] for student modeling, in [63] for profiling of e-commerce customers and in [56] for capturing users' preferences to improve Web searches. GA have also been applied for filtering [26] and for classification, as in [96]. Another Web personalization task commonly accomplished by means of EA is dynamic query optimization.
In [38] GA are applied to query optimization in document retrieval. Boughanem et al. [8] developed a query reformulation technique using GA, in which a GA generates several queries that explore different areas of the document space and determines the optimal one. Automatic Web page categorization and updating can also be performed using GAs [59]. Evolutionary computing techniques can also be applied to develop personalized Web services, by automatically selecting optimal combinations of Web service components from available component repositories, as in [12]. Overall, GA and EA are suitable for searching vast, complex, and multimodal problem spaces such as the Web space, although they may have some limitations with respect to their computational complexity.
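The sketch below illustrates the evolutionary loop described in this subsection on a deliberately simple task: evolving a binary vector that selects pages to recommend so as to maximize a toy fitness (the sum of assumed relevance scores, with a penalty for recommending too many pages). The encoding, fitness and parameters are illustrative assumptions.

import random

random.seed(1)

RELEVANCE = [0.9, 0.1, 0.6, 0.2, 0.8, 0.3]   # assumed per-page relevance scores
MAX_LINKS = 3                                 # assumed budget of recommended links

def fitness(individual):
    score = sum(r for gene, r in zip(individual, RELEVANCE) if gene)
    penalty = max(0, sum(individual) - MAX_LINKS)
    return score - penalty

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(individual, rate=0.1):
    return [1 - g if random.random() < rate else g for g in individual]

population = [[random.randint(0, 1) for _ in RELEVANCE] for _ in range(20)]
for _ in range(50):                           # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                 # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=fitness)
print(best, round(fitness(best), 2))          # expected to favor pages 0, 4 and 2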

4.4. Hybrid CI techniques in Web personalization

To a large extent, the key points and the shortcomings of the CI paradigms appear to be complementary. Therefore, it is natural practice to build integrated strategies combining the concepts of different CI paradigms to overcome the limitations and exploit the advantages of each single paradigm [37,103]. Various examples of combinations of CI techniques can be found in the literature concerning Web personalization applications, ranging from very simple combination schemes to more complicated ones. An example of a simple combination is in [53], where user profiles are derived by a clustering process that combines a fuzzy clustering (Fuzzy c-Means) and a neural clustering (using a Self-Organising Map). A more complex form of hybridization, using all three CI paradigms together, can be found in [52], where a recommendation system for electronic commerce is designed using fuzzy rules obtained by a combination of fuzzy neural networks and genetic algorithms. Here, FL has also been used for filtering: FL provides a soft filtering process based on the degree of concordance between user preferences and the elements being filtered. Among the multitude of hybrid strategies proposed in the literature that involve NN, FL and EA, it is straightforward to indicate Neuro-Fuzzy (NF) systems as the most prominent representatives of hybridization in terms of the number of practical implementations in several application areas. NF systems use NN to learn and fine-tune the rules or membership functions of a FS from input-output data [64]. With this approach, the main drawbacks of NN and FL, i.e. the black-box behavior of NN and the lack of a learning mechanism in FS, are avoided. NF systems automate the process of transferring expert or domain knowledge into fuzzy rules; hence they are basically FS endowed with an automatic learning process provided by NN, or NN provided with an explicit form of knowledge representation given by FL. As a consequence, NF techniques are especially suited for Web personalization tasks where knowledge interpretability is desired. One of these tasks is the extraction of association rules for recommendation. Many NF approaches have been developed to derive fuzzy recommendation rules. In [31], fuzzy association rules understandable to humans are learnt from a database containing both quantitative and categorical attributes by using a neuro-fuzzy approach like the one proposed by Nauck [75]. Lee [55] uses a NF system for recommendation in an e-commerce site. Stathacopoulou et al. [99] and Magoulas et al. [60] use NF to implement a classification or recommendation system with the purpose of adapting the contents of a Web course according to the model of the student. Recently, Castellano et al. [10] proposed a Web personalization approach that uses a neuro-fuzzy system to learn fuzzy rules for dynamic link recommendation (the next section is devoted to outlining the main features of this approach, in order to give an example of how different CI tools can be used synergistically to perform Web personalization). In general, all these NF approaches to Web personalization exhibit the positive aspects of NN and FL; nevertheless, they still maintain some of the limitations of both approaches, mainly difficult application to dynamic modeling, i.e. they are not very suitable for personalization tasks that need to change a user model on-the-fly.

5. A Web personalization system based on CI techniques

In this section we provide an example of a hybrid Web personalization approach using different CI techniques as tools for Web mining. Specifically, we describe a neuro-fuzzy strategy to develop a Web recommendation system which heavily relies on CI techniques. In this framework, a fuzzy clustering algorithm is applied to group preprocessed Web usage data into session categories. Then, a hybrid approach based on the combination of fuzzy reasoning with a neural network is employed in order to derive fuzzy rules that provide dynamic predictions about the Web pages to be suggested to the current user, on the basis of the
session categories previously identified. According to the general scheme of a Web personalization process described in Section 3, three different phases can be distinguished in this approach:

– preprocessing of Web log files to extract useful data about the URLs visited during user sessions;
– pattern discovery to derive session categories and to discover associations between session categories and URLs to be recommended;
– personalization, which uses the knowledge extracted through the previous phases to dynamically recommend interesting URLs to the current user.

Two major modules can be distinguished in the system: an offline module that performs the first two phases in order to extract knowledge from Web usage data, and an online module that recommends interesting Web pages on the basis of the discovered knowledge. During the preprocessing task, user sessions are extracted from the log files stored by the Web server. Next, a fuzzy clustering algorithm is executed on these records to group similar sessions into session categories. Finally, starting from the extracted categories and the available data about user sessions, a knowledge base expressed in the form of fuzzy rules is extracted via a neuro-fuzzy learning strategy. Such a knowledge base is exploited during the recommendation phase (performed by the online module) to dynamically suggest links to Web pages judged interesting for the current user. Specifically, when a user requests a new page, the online module matches his current partial session with the session categories identified by the offline module and derives relevance degrees for the URLs by means of a fuzzy inference process. In the following, we describe in more detail all the tasks involved in the Web personalization process.

5.1. Web data preprocessing

The aim of the preprocessing step is to identify user sessions starting from the information contained in the access log file. Since sessions encode the navigational behavior of the users, their identification plays an important role in the success of a personalization system. A user session can be defined as a limited set of pages accessed by the same user within a particular visit. We consider a user session as the set of accesses originating from the same IP address within a predefined time period. Such a time period is defined as the
maximum elapsed time between two consecutive accesses. Supposing that the Web site is composed of n pages, each URL is assigned a unique number j = 1, ..., n. Thus, a user session is represented by an n-dimensional vector whose j-th element expresses the degree of interest of the user in the j-th Web page. The degree of interest in a Web page can be defined in different ways. One possibility is to represent it by the amount of time the user spends on a page, estimated from the time difference between two consecutive accesses. This approach seems reasonable, since it tends to weight content pages higher. However, as observed in [108], a long access can completely obscure the importance of other relevant pages. Another possibility is to define interest degrees by the number of times a page was visited during the navigation. In this work, we define the degree of interest in a URL in terms of both the time the user spends on the page and the frequency of accesses to the page (the number of accesses to that page relative to the total number of accesses during the session). Formally, the i-th user session is represented by a vector $s^{(i)} = (s_1^{(i)}, s_2^{(i)}, \ldots, s_n^{(i)})$ with $s_j^{(i)} = f_j^{(i)} t_j^{(i)}$ for $j = 1, \ldots, n$, where $f_j^{(i)}$ and $t_j^{(i)}$ indicate, respectively, the access frequency and the fraction of time spent by the user on the j-th page with respect to the total duration of the i-th session. Summarizing, after the preprocessing phase, a collection of N sessions $s^{(i)}$ is identified from the log data.
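A minimal sketch of this session encoding is given below: for each session it computes, per page, the access frequency and the fraction of the session time spent on the page, and multiplies them as in the definition of $s_j^{(i)}$ above. The page ordering and the handling of the last access (whose duration cannot be derived from the log) are assumptions.

def session_vector(session, page_index):
    """Encode one session (time-ordered log entries) as an interest vector."""
    n = len(page_index)
    counts = [0.0] * n
    times = [0.0] * n
    for current, nxt in zip(session, session[1:]):
        j = page_index[current["url"]]
        counts[j] += 1
        times[j] += (nxt["time"] - current["time"]).total_seconds()
    # The duration of the last access is unknown; here it only adds a count.
    counts[page_index[session[-1]["url"]]] += 1

    total_accesses = sum(counts)
    total_time = sum(times) or 1.0            # avoid division by zero
    return [(counts[j] / total_accesses) * (times[j] / total_time)
            for j in range(n)]

# page_index = {"/home": 0, "/news": 1, ...}; sessions from build_sessions(...)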

5.2. Pattern discovery

Once the first step of Web log file preprocessing has been completed, the successive step of pattern discovery begins. The major aims of this phase are:

– the identification of patterns which describe the interests and the navigational behavior of users;
– the extraction of a knowledge base which contains associations between user sessions and URLs to be recommended.

To achieve these aims, two main activities are executed:

– categorization of user sessions;
– discovery of associations.

More precisely, starting from the user sessions identified in the previous step, a fuzzy clustering process is performed in order to categorize user sessions. Next, the user sessions and the derived categories are exploited in order to extract a knowledge base useful for the successive recommendation step. Knowledge extraction is performed by the application of a neuro-fuzzy model which provides a fuzzy rule base, where each rule represents the associations between user sessions and URLs to be recommended. In the next sections, categorization of user sessions and discovery of associations are described in more detail.

5.2.1. Categorization of user sessions

Once user sessions have been identified, a clustering process is applied in order to group similar sessions into categories. Each session category includes accesses made by users exhibiting a common browsing behavior and hence similar interests. The identified session categories will subsequently be exploited for suggesting links to pages considered interesting for the user. In this work, the well-known Fuzzy C-Means (FCM) clustering algorithm [6] is applied in order to group user sessions into overlapping categories. Briefly, the FCM algorithm finds C clusters based on the minimization of the following objective function:

$$F_\alpha = \sum_{i=1}^{N} \sum_{c=1}^{C} m_{ic}^{\alpha} \left\| s^{(i)} - p^{(c)} \right\|^2, \quad 1 < \alpha < \infty$$

where $\alpha$ is any real number greater than 1, $m_{ic}$ is the degree of membership of the session vector $s^{(i)}$ to the c-th cluster, and $p^{(c)}$ is the center (prototype vector) of the c-th cluster. The FCM algorithm works as follows:

1. Initialize the membership matrix $M^{(0)} = [m_{ic}]_{i=1,\ldots,N;\; c=1,\ldots,C}$.
2. At the $\tau$-th step, calculate the prototype vectors $p^{(c)}$, $c = 1, \ldots, C$, as
   $$p^{(c)} = \frac{\sum_{i=1}^{N} m_{ic}^{\alpha} s^{(i)}}{\sum_{i=1}^{N} m_{ic}^{\alpha}}$$
3. Update $M^{(\tau)}$ according to:
   $$m_{ic} = \frac{1}{\sum_{k=1}^{C} \left( \frac{\| s^{(i)} - p^{(c)} \|}{\| s^{(i)} - p^{(k)} \|} \right)^{\frac{2}{\alpha-1}}}$$
4. If $\| M^{(\tau)} - M^{(\tau-1)} \| < \varepsilon$ with $0 < \varepsilon < 1$, stop; otherwise return to step 2.

As a result, FCM provides:

– C cluster prototypes represented as vectors $p^{(c)} = (p_1^{(c)}, p_2^{(c)}, \ldots, p_n^{(c)})$ with $c = 1, \ldots, C$;
– a fuzzy partition matrix $M = [m_{ic}]_{i=1,\ldots,N;\; c=1,\ldots,C}$, where each value $m_{ic}$ represents the membership degree of the i-th session to the c-th cluster.
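A compact NumPy sketch of these FCM updates is given below, assuming the session vectors are stacked in a matrix S of shape (N, n); the fuzzifier, number of clusters and stopping threshold are illustrative choices.

import numpy as np

def fuzzy_c_means(S, C, alpha=2.0, eps=1e-4, max_iter=100, seed=0):
    """Return (prototypes P of shape (C, n), memberships M of shape (N, C))."""
    rng = np.random.default_rng(seed)
    N = S.shape[0]
    M = rng.random((N, C))
    M /= M.sum(axis=1, keepdims=True)           # rows sum to one

    for _ in range(max_iter):
        W = M ** alpha
        P = (W.T @ S) / W.sum(axis=0)[:, None]  # prototype update (step 2)
        # Distances between every session and every prototype.
        dist = np.linalg.norm(S[:, None, :] - P[None, :, :], axis=2) + 1e-12
        M_new = 1.0 / (dist ** (2.0 / (alpha - 1.0)))
        M_new /= M_new.sum(axis=1, keepdims=True)   # membership update (step 3)
        if np.linalg.norm(M_new - M) < eps:          # stopping test (step 4)
            M = M_new
            break
        M = M_new
    return P, M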

Summarizing, the clustering phase mines a collection of C session categories from the session data. Each category describes the typical navigational behavior of a group of users with similar interests in the pages of the Web site. It should be noted that if a user has complete access to the Web site, i.e. he visits all the Web pages, then the corresponding session is assigned to a fuzzy category (cluster) whose prototype vector contains all non-zero elements. The values of such elements depend on the time spent on each page and the access frequency of each page.

5.2.2. Discovery of associations for recommendation

Session data and the session categories identified by fuzzy clustering are employed to extract associations between user sessions and URLs to be recommended. Such associations represent the knowledge base to be used by the online personalization module (see Fig. 2). The discovery of associations is performed through the learning of a neuro-fuzzy network, i.e. a neural network that encodes in its topology the structure of a Fuzzy Inference System (FIS) consisting of three conceptual components: (i) a rule base, which contains a number of fuzzy rules; (ii) a database, which defines the membership functions of the fuzzy sets used in the fuzzy rules; and (iii) a reasoning mechanism, which performs the inference procedure upon the rules and the given data to derive a reasonable output. In our case, each rule in the FIS expresses a fuzzy relation between a user session $s = (s_1, \ldots, s_n)$ and the URLs to be recommended, in the following form:

IF ($s_1$ is $A_{1k}$) AND ... AND ($s_n$ is $A_{nk}$)
THEN (relevance of URL$_1$ is $b_{1k}$) AND ... AND (relevance of URL$_n$ is $b_{nk}$)

for $k = 1, \ldots, C$, where C is the number of rules,2 $A_{jk}$ ($j = 1, \ldots, n$) are fuzzy sets with Gaussian membership functions defined over the input variables $s_j$, and $b_{jk}$ are fuzzy singletons expressing the amount of recommendation (relevance degree) of the j-th URL. The main advantage of using a fuzzy knowledge base for the recommendation system is the readability of the extracted knowledge. Actually, fuzzy rules can be easily understood by human users, since they can be expressed in a linguistic fashion by labeling the fuzzy sets $A_{jk}$ with linguistic terms such as LOW, MEDIUM, HIGH. Hence, a fuzzy rule for recommendation can assume the following linguistic form:

IF (the degree of interest for URL$_1$ is LOW) AND ... AND (the degree of interest for URL$_n$ is HIGH)
THEN (recommend URL$_1$ with relevance 0.3) AND ... AND (recommend URL$_n$ with relevance 0.8)

The derivation of if-then rules and of the corresponding membership functions depends heavily on a priori knowledge about the system under consideration. However, there is no systematic way to transform expert knowledge into the knowledge base of a FIS. On the other hand, the neural network learning mechanism does not rely on human expertise. Therefore, it is natural to consider building an integrated system combining the concepts of FIS and NN learning, called a neuro-fuzzy network. In this work, the fuzzy rule-based model is implemented as a three-layer feedforward neural network which reflects the fuzzy rule base in its parameters and topology. In particular, the three layers of the neuro-fuzzy network compute, respectively:

– the membership degrees to the fuzzy sets;
– the fulfillment degree of each fuzzy rule;
– the inferred output.

Units in the first layer L1 receive the degrees of interest in the pages visited in a session $(s_1, s_2, \ldots, s_n)$ and evaluate the Gaussian membership functions representing the fuzzy sets. In this layer, units are arranged in C groups, one for each fuzzy rule (the number of fuzzy rules is equal to the number of session categories discovered by fuzzy clustering). The k-th group contains n units corresponding to the fuzzy sets which define the premise part of the k-th rule. In detail, each unit $u_{jk} \in L_1$ receives the interest degree for the j-th page $s_j$, $j = 1, \ldots, n$, and computes its membership value to the fuzzy set $A_{jk}$ as follows:

$$O_{jk}^{(1)} = \exp\left( -\frac{(s_j - x_{jk})^2}{2\sigma_{jk}^2} \right), \quad j = 1, \ldots, n, \quad k = 1, \ldots, C \qquad (1)$$

where $x_{jk}$ and $\sigma_{jk}$ are the center and the width of the Gaussian function, representing the adjustable parameters of unit $u_{jk} \in L_1$.

2 The number of rules is equal to the number of session categories identified by fuzzy clustering. Indeed, each fuzzy rule associates relevance degrees for the URLs to each session category.


The second layer L2 contains C units that compute the fulfillment degree of each rule. In this layer, no modifiable parameter is associated with the units. The output is derived by computing the rule activation strength, as follows: (2)

Ok =

n

(1)

Ojk ,

j = 1, . . . , n

j=1

The third layer L3 provides the outputs of the network, i.e. the relevance values of the n URLs to be used for recommendation. Each relevance value is obtained by inference of rules, according to the following formula: C (3) (3) k=1 Ok bjk , j = 1, . . . , n Oj =  (3) C k=1 Ok Connections between layer L2 and L3 are weighted by the fuzzy singletons bjk that represent a set of free parameters for the neuro-fuzzy network. In order to learn recommendation fuzzy rules, the neuro-fuzzy network is trained on a set of input-output samples describing the association between user sessions and preferred URLs. Precisely, each training sample should describe the association between a user session (as described in Section 5.1, a user session is actually a vector of interest degrees for visited URLs during the session), and the amount of recommendation (relevance degree) for each URL. Thus the training set is a collection of N input-output vectors:

In order to learn the recommendation fuzzy rules, the neuro-fuzzy network is trained on a set of input-output samples describing the associations between user sessions and preferred URLs. Precisely, each training sample should describe the association between a user session (as described in Section 5.1, a user session is actually a vector of interest degrees for the URLs visited during the session) and the amount of recommendation (relevance degree) of each URL. Thus the training set is a collection of N input-output vectors:

$$T = \left\{ \left( s^{(i)}, r^{(i)} \right) \right\}_{i=1,\ldots,N}$$

where the input vector $s^{(i)}$ represents the $i$-th user session identified in the preprocessing phase, and the output vector $r^{(i)}$ expresses the amount of recommendation of each URL for the $i$-th user session. To compute the values in $r^{(i)}$, the information embedded in the discovered session categories is exploited. Precisely, for each session $s^{(i)}$ we consider its membership to the C clusters (session categories), expressed by the membership values $\{m_{i,c}\}_{c=1,\ldots,C}$ in the partition matrix M. Then, the $L$ top-matching session categories $c_1, \ldots, c_L$ are identified as those with membership values higher than a threshold, whose value can be fixed experimentally (in this work it is fixed to 0.5). The values in the output vector $r^{(i)} = (r_1^{(i)}, \ldots, r_n^{(i)})$ are hence calculated as

$$r_j^{(i)} = \sum_{l=1}^{L} m_{i,c_l}\, p_j^{(c_l)}, \quad j = 1, \ldots, n, \; i = 1, \ldots, N$$

where $p_j^{(c_l)}$ is the $j$-th component of the prototype vector of the top-matching cluster $c_l$.
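As a concrete illustration, the sketch below builds the target vectors from a fuzzy partition matrix and the cluster prototypes following the procedure above; the matrices and toy values are hypothetical, and the 0.5 threshold mirrors the choice reported in the text.

```python
import numpy as np

def build_targets(M, P, threshold=0.5):
    """Target vectors r^(i): weighted sum of the prototypes of the top-matching clusters."""
    M = np.asarray(M, dtype=float)          # (N, C) memberships m_{i,c} from fuzzy clustering
    P = np.asarray(P, dtype=float)          # (C, n) cluster prototype vectors p^{(c)}
    W = np.where(M > threshold, M, 0.0)     # keep only the top-matching categories
    return W @ P                            # r_j^(i) = sum_l m_{i,c_l} * p_j^{(c_l)}

# Toy data: N = 2 sessions, C = 2 categories, n = 3 URLs (values are made up).
M = [[0.7, 0.3], [0.4, 0.6]]
P = [[0.9, 0.1, 0.4], [0.2, 0.8, 0.6]]
print(build_targets(M, P))                  # one target vector per training session
```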

Once the training set has been constructed, the neuro-fuzzy network enters the learning phase, whose goal is to extract the knowledge embedded in the training set and to represent it as a collection of fuzzy rules. The neuro-fuzzy learning is articulated in two steps. The first step is based on unsupervised learning of the neural network, which provides a clustering of the session data and the definition of an initial fuzzy rule base; in this step, the structure and the parameters of the fuzzy rules are identified. Successively, the obtained knowledge base is refined by a supervised learning process, in which the fuzzy rule parameters are tuned to improve the accuracy of the derived knowledge. Further details on the algorithms underlying the neuro-fuzzy learning strategy can be found in [11].

5.3. Personalization

The online recommendation module performs the ultimate task of personalization, i.e. suggesting the URLs of the Web site that are judged relevant for the current user. Specifically, when a new user accesses the Web site, the module matches his current partial session against the fuzzy rules obtained offline and derives a vector of relevance degrees by means of a fuzzy inference process. Formally, based on the set of C rules generated through the knowledge extraction procedure described above, the recommendation module provides URL relevance degrees for a new user session $s^0$ by means of the following fuzzy reasoning procedure. First, the matching degree of the current session $s^0$ to the $k$-th rule, for $k = 1, \ldots, C$, is calculated by means of the product operator:

$$\mu_k(s^0) = \prod_{j=1}^{n} \mu_{jk}(s_j^0)$$

Then, the relevance degree $r_j^0$ of the $j$-th URL is calculated as:

$$r_j^0 = \frac{\sum_{k=1}^{C} b_{jk}\, \mu_k(s^0)}{\sum_{k=1}^{C} \mu_k(s^0)}, \quad j = 1, \ldots, n$$

When a new user visits the Web site, the system creates an active session in the form of a vector, which is updated each time the user requests a new page. To maintain the active session, a sliding window is used to capture the user's most recent behavior. Thus the partial active session is represented as a vector $s^0 = (s_1^0, \ldots, s_n^0)$ in which some values, corresponding to unexplored pages, are equal to zero. In order to perform dynamic link suggestion, the recommendation module firstly identifies the URLs not yet visited by the current user, i.e. all pages such that $s_j^0 = 0$. Then, among the unexplored pages, only those having a relevance degree $r_j^0$ greater than a threshold $\alpha$ are recommended to the user, by dynamically including a list of links to their URLs in the page currently visited by the user.
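A possible rendering of this online step is sketched below; the rule parameters, the threshold value and the toy active session are illustrative assumptions rather than the system's actual configuration.

```python
import numpy as np

def recommend(s0, x, sigma, b, alpha=0.5, eps=1e-12):
    """Suggest unexplored pages whose inferred relevance degree exceeds alpha."""
    s0 = np.asarray(s0, dtype=float)
    # Matching degree mu_k(s0) of the active session to each rule (product operator).
    mu = np.exp(-((s0.reshape(-1, 1) - x) ** 2) / (2.0 * sigma ** 2)).prod(axis=0)
    r = (b * mu).sum(axis=1) / (mu.sum() + eps)          # relevance degrees r_j^0
    # Recommend only pages not yet visited (zero entries) with relevance above alpha.
    return [j for j in range(len(s0)) if s0[j] == 0.0 and r[j] > alpha]

# Hypothetical parameters (n = 3 URLs, C = 2 rules); pages at indices 1 and 2 are unexplored.
x = np.array([[0.8, 0.2], [0.1, 0.7], [0.5, 0.5]])
sigma = np.full((3, 2), 0.3)
b = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
print(recommend([0.9, 0.0, 0.0], x, sigma, b, alpha=0.3))
```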

5.4. Simulation results

In this section we show the application of the neuro-fuzzy Web recommendation approach to a sample Web site. Session data were obtained by preprocessing a log file covering a time period of two weeks. The data filtering process selected the 10 most visited Web pages; for the sake of brevity, we denote these pages as P1, P2, ..., P10. Starting from these data, a total number of 62 user sessions was identified by setting the maximum elapsed time between two consecutive accesses from the same IP address to 30 minutes.

Next, the FCM algorithm was applied in order to obtain the session categories. By carrying out different tests, we identified C = 6 as the best number of categories (clusters). Indeed, we observed that setting a higher number of clusters (e.g. C = 8 or C = 10) produced several prototype vectors with similar values, showing that 6 clusters were enough to model all the categories existing in the available data. On the other hand, setting a number below 6, we risked leaving out interesting categories. The information about the session categories obtained by clustering is summarized in Table 1. In particular, for each category (labeled with numbers 1, 2, ...) the pages with the highest degrees of interest are indicated; in the last column, the common access pages characterizing the category are reported.

The information about the user sessions was used to create a dataset of 62 input-output samples in the way described in Section 5.2.2. Starting from this dataset, a training set and a test set were derived by specifying a percentage (80%, in this case) of the total number of sessions. Once the training set was composed, we applied the neuro-fuzzy strategy to derive the fuzzy rule base containing the associations between user sessions and URL relevances. In particular, a neural network with 10 inputs (corresponding to the pages of the sessions) and 10 outputs (corresponding to the relevance values of the Web pages) was considered. The network was trained until the error on the training set dropped below 0.01. The derived fuzzy rule base was finally used to infer the relevance degree of each URL for the current user.
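As an illustration of the session identification step mentioned above (the 30-minute timeout on consecutive accesses from the same IP address), a minimal sketch could look as follows; the record format (ip, url, timestamp in seconds) is an assumption about the preprocessed log.

```python
from collections import defaultdict

def sessionize(records, timeout=30 * 60):
    """Group (ip, url, timestamp) records into sessions using a timeout on consecutive accesses."""
    by_ip = defaultdict(list)
    for ip, url, ts in sorted(records, key=lambda r: (r[0], r[2])):
        by_ip[ip].append((url, ts))
    sessions = []
    for ip, visits in by_ip.items():
        current = [visits[0]]
        for (url, ts), (_, prev_ts) in zip(visits[1:], visits):
            if ts - prev_ts > timeout:          # gap too large: close the current session
                sessions.append((ip, current))
                current = []
            current.append((url, ts))
        sessions.append((ip, current))
    return sessions

# Toy log: two requests 10 minutes apart, then one 2 hours later -> two sessions.
log = [("10.0.0.1", "/P1", 0), ("10.0.0.1", "/P3", 600), ("10.0.0.1", "/P2", 7800)]
print(len(sessionize(log)))  # 2
```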

Table 1
Session categories extracted by FCM

Session category    Interest degrees                        Common access pages
1                   P1 = 0.86, P8 = 0.82                    {P1, P8}
2                   P2 = 0.86, P9 = 0.80                    {P2, P9}
3                   P4 = 0.88, P7 = 0.85, P10 = 0.83        {P4, P7, P10}
4                   P1 = 0.82, P9 = 0.80                    {P1, P9}
5                   P3 = 0.87, P6 = 0.84, P9 = 0.76         {P3, P6, P9}
6                   P5 = 0.89, P10 = 0.84                   {P5, P10}

On the basis of the relevance degrees obtained through the fuzzy inference process, it was possible to suggest a list of links to the unexplored pages deemed most interesting to the user. The list of recommended pages was derived by following the procedure described in Section 5.3.

In order to evaluate the effectiveness of the fuzzy knowledge base used for recommendation, we performed an evaluation procedure organized as follows. The Web pages visited in each session of the test set were randomly divided into an input set (Ia) and a measurement set (Ma). The input set was treated as the active session and given as input to our recommendation process to determine the recommended pages, or recommended set (Ra). In this way, each complete session of the test set was treated as ground truth, a subset of this session (the input set) was treated as the incomplete current sub-session, and the recommendations were treated as the predicted complete session. This process is very similar to an information retrieval task. Hence, to measure the accuracy of the recommendations provided by the neuro-fuzzy system, we used three metrics commonly adopted in the information retrieval community, namely precision, recall and the F1 measure.

The recommendation precision represents the percentage of returned links that are relevant. It is defined as:

$$\text{Precision} = \frac{|M_a \cap R_a|}{|R_a|}$$

Recall indicates the percentage of relevant links that are returned. It is given by:

$$\text{Recall} = \frac{|M_a \cap R_a|}{|M_a|}$$

Finally, the F1 measure has been suggested to combine recall and precision with equal weight; higher values indicate better recommendations. It is defined in terms of the other two metrics as follows:

$$F_1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$
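A compact sketch of how these metrics can be computed for one test session is given below; the page identifiers in the toy example are hypothetical.

```python
def precision_recall_f1(Ma, Ra):
    """Precision, recall and F1 from the measurement set Ma and the recommended set Ra."""
    Ma, Ra = set(Ma), set(Ra)
    hits = len(Ma & Ra)
    precision = hits / len(Ra) if Ra else 0.0
    recall = hits / len(Ma) if Ma else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy example: pages P2 and P7 were held out; the recommender returned P2, P5, P7, P9.
print(precision_recall_f1({"P2", "P7"}, {"P2", "P5", "P7", "P9"}))  # (0.5, 1.0, ~0.667)
```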

During the simulation experiments, for each session of the test set we considered all possible sub-sessions containing between 2 and 6 pages. For each sub-session, representing the active session, we determined the recommendations through the neuro-fuzzy inference process. Hence, the average values of the precision, recall and F1 measures were calculated. To evaluate the goodness of the obtained values, we compared them with those obtained by other recommendation approaches. In particular, we implemented three different approaches proposed in the literature:

– The Nearest Profile (NP) recommendation approach, which uses the profile vector considered closest to a given session on the basis of a similarity measure (in this case the cosine measure) [73].
– The K Nearest Neighbors (KNN) approach followed by top-N recommendations. In this approach, given a session, the closest K sessions are found; then the URLs present in these K sessions are sorted in decreasing order of relevance, and the top-N URLs are treated as the recommendation set [33].
– The recommendation approach based on fuzzy approximate reasoning (FAR) proposed by Nasraoui and Petenes [73]. For this approach we implemented two variants, differing in the operators used for the t-norm/intersection and t-conorm/union in the composition of the recommendation inference procedure: max-min (FAR_MM) and sum-product (FAR_SP).

The average values of precision, recall and F1 measure obtained by the different recommendation approaches are shown in Figs 3, 4, and 5, respectively. It can be seen that the recommendations generated by the neuro-fuzzy approach are better than those obtained with the other implemented approaches, especially for larger sub-session sizes. The better performance for longer sub-sessions is due to the fact that longer sessions are more likely to match more than one session category.

Fig. 3. Comparison of average precision per sub-session size for five recommendation approaches.

Fig. 4. Comparison of average recall per sub-session size for five recommendation approaches.

Fig. 5. Comparison of average F1 measure per sub-session size for five recommendation approaches.

In this situation, a neuro-fuzzy approach is expected to be more effective, because it allows a user session to belong to several categories with different membership degrees.
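For reference, a minimal sketch of the NP baseline used in the comparison (cosine matching of the active session against the category prototypes) is given below; the threshold-based selection of unexplored pages and the toy values are illustrative assumptions, not the exact implementation of [73].

```python
import numpy as np

def nearest_profile_recommend(s0, prototypes, alpha=0.5):
    """Recommend unexplored pages from the prototype most similar (cosine) to the active session."""
    s0 = np.asarray(s0, dtype=float)
    P = np.asarray(prototypes, dtype=float)                       # (C, n) profile vectors
    sims = P @ s0 / (np.linalg.norm(P, axis=1) * np.linalg.norm(s0) + 1e-12)
    best = P[np.argmax(sims)]                                     # closest profile
    return [j for j in range(len(s0)) if s0[j] == 0.0 and best[j] > alpha]

# Toy example: 3 pages, 2 profiles; the unexplored page at index 2 gets recommended.
print(nearest_profile_recommend([0.9, 0.8, 0.0], [[0.9, 0.8, 0.7], [0.1, 0.2, 0.9]]))  # [2]
```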

6. Conclusions

This paper has presented an overview of recent approaches to Web personalization that employ CI techniques. This work emphasizes how most of the Web personalization applications developed so far are based on combinations of CI techniques. In particular, hybrid approaches that synergistically combine neural networks and fuzzy logic, neural networks and genetic algorithms, clustering and fuzzy logic, or genetic algorithms and rule extraction have been successfully applied to accomplish personalization tasks.

As an example of a hybrid approach to Web personalization, we have described a neuro-fuzzy Web personalization system that provides dynamic predictions about the Web pages to be suggested to the user visiting a Web site. This hybrid system discovers user session categories from Web data through fuzzy clustering and derives fuzzy association rules for link suggestion through neural learning. Comparative simulation results show that the neuro-fuzzy approach outperforms other standard methods in terms of the quality of the derived recommendations. This suggests the potential that hybrid CI approaches may have in Web personalization, opening new research directions within the area of Computational Web Intelligence.

References [1] A. Abraham, Business intelligence from Web usage mining, Journal of Information & Knowledge Management 2(4) (2003), 375–390. [2] S.S. Anand and B. Mobasher, Intelligent Techniques for Web Personalization, in: Lecture Notes in Computer Science, B. Mobasher and S.S. Anand, eds, 3169, 2005, pp. 1–37. [3] S. Araya, M. Silva and R. Weber, A methodology for web usage mining and its application to target group identification, Fuzzy Sets and Systems 148 (2004), 139–152. [4] L. Ardissono and A. Goy, Tailoring the interaction with users in electronic shops, in: Proceedings of the 7th International Conference on User Modeling, UM99, Banff, Canada, 1999, pp. 35–44. [5] A. Banerjee and J. Ghosh, Clickstream clustering using weighted longest common subsequences, in: Proceedings of the Web Mining Workshop at the 1st SIAM Conference on Data Mining, 2001.

[6] J.C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms, New York, Plenum Press, 1981. [7] S. Bidel, L. Lemoine and F. Piat, Statistical machine learning for tracking hypermedia user behavior, in: Proceedings of the 2nd Workshop on Machine Learning, Information Retrieval and User Modeling, 9th International Conference in User Modeling, 2003, pp. 56–65. [8] M. Boughanem, C. Chrisment, J. Mothe, C.S. Dupuy and L. Tamine, Connectionist and genetic approaches for information retrieval, in: Soft Computing in Information Retrieval: Techniques and Applications, F. Crestani and G. Pasi, eds, Vol. 50, Heidelberg, Germany, Physica-Verlag, 2000, 102– 121. [9] J. Callan, A. Smeaton, M. Beaulieu, P. Borlund, P. Brusilovsky, M. Chalmers, et al., Personalization and recommender systems in digital libraries, in: Proc. of the 2nd DELOS Workshop on Personalization and Recommender Systems in Digital Libraries, 2001. [10] G. Castellano, A.M. Fanelli and M.A. Torsello, A Web personalization system based on a neuro-fuzzy strategy, in: Proc. of the 7th Asian Pacific Industrial Engineering and Management Systems Conference and The 9th Asia Pacific Regional Meeting of International Foundation for Production Research (APIEMS 2006), December 17–20, 2006, Bangkok, Thailand. [11] G. Castellano, C. Castiello, A.M. Fanelli and C. Mencar, Knowledge discovering by a neuro-fuzzy modelling framework, Fuzzy Sets and Systems 149 (2005), 187–207. [12] W.-C. Chang, C.-S. Wu and C. Chang, Optimizing dynamic Web service component composition by using evolutionary algorithms, in: Proceedings of the 2005 IEEE/WIC/ACM International Conference on Web Intelligence, Sept. 2005, pp. 708–711. [13] S.W. Changchien and T. Lu, Mining association rules procedure to support on-line recommendation by customers and products fragmentation, Expert Systems with Applications 20 (2001), 325–335. [14] Y.H. Cho and J.K. Kim, Application of Web usage mining and product taxonomy to collaborative recommendations in ecommerce, Expert Systems with Applications 26 (2004), 233– 246. [15] R. Cooley, Web Usage Mining: Discovery and Application of Interesting Patterns from Web Data, Ph.D. thesis, University of Minnesota, 2000. [16] R. Cooley, B. Mobasher and J. Srivastava, Data preparation for mining WorldWideWeb browsing patterns, Journal of Knowledge and Information Systems 1(1) (1999), 55–32. [17] R. Cooley, P.N. Tan and J. Srivastava, Discovering of Interesting Usage Patterns from Web Data, TR 99-022, University of Minnesota, 1999. [18] F. Crestani and G. Pasi, Soft Computing in Information Retrieval: Techniques and Application, Vol. 50, Heidelberg, Germany, Physica-Verlag, 2000. [19] Y. Dong, X. Xiaoying Tai and J. Zhao, A distributed algorithm based on competitive neural network for mining frequent patterns, in: Proc. of International Conference on Neural Networks and Brain, Oct. 2005 (ICNN&B ’05), Vol. 1, 2005, pp. 499–503. [20] A.E. Eiben and J.E. Smith, Introduction to Evolutionary Computing, Natural Computing Series, Springer, 2003. [21] M. Eirinaki and M. Vazirgiannis, Web mining for web personalization, ACM TOIT 3(1) (2003), 2–27.

[22] N. Eiron and K.S. McCurley, Untangling compound documents on the Web, in: Proceedings of ACM Hypertext, 2003, pp. 85–94.
[23] O. Etzioni, The world wide Web: Quagmire or gold mine, Communications of the ACM 39(11) (1996), 65–68.
[24] O. Etzioni and O. Zamir, Web document clustering: A feasibility demonstration, in: Proc. 21st Annu. Int. ACM SIGIR Conf., 1998, pp. 46–54.
[25] F.M. Facca and P.L. Lanzi, Mining interesting knowledge from weblogs: A survey, Data & Knowledge Engineering 53 (2005), 225–241.
[26] W. Fan, M.D. Gordon and P. Pathak, Personalization of search engine services for effective retrieval and knowledge management, in: Proceedings of the 21st International Conference on Information Systems, 2000, pp. 20–34.
[27] E. Frias-Martinez, G. Magoulas, S. Chen and R. Macredie, Modeling human behavior in user-adaptive systems: Recent advances using soft computing techniques, Expert Systems with Applications 29(2) (2005), 320–329.
[28] D. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Reading, MA, Addison-Wesley, 1989.
[29] N. Golovin and E. Rahm, Reinforcement learning architecture for Web recommendations, in: Proceedings of the International Conference on Information Technology: Coding and Computing (ITCC’04), Vol. 1, 2004, pp. 398–402.
[30] M. Gordon, W. Fan and P. Pathak, Adaptive Web search: evolving a program that finds information, IEEE Intelligent Systems and Their Applications 21(5) (2006), 72–77.
[31] A. Gyenesei, A Fuzzy Approach for Mining Quantitative Association Rules, Univ. Turku, Dept. Comput. Sci., Lemminkisenkatu 14, Finland, TUCS Tech. Rep. 336, 2000.
[32] G. Greco, S. Greco and E. Zumpano, Web communities: Models and algorithms, World Wide Web 7(1) (2004), 58–82.
[33] R. Haapanen, R.E. Alan, M.E. Bauer and A.O. Finley, Delineation of forest/nonforest land use classes using nearest neighbor methods, Remote Sensing of Environment 89 (2004), 265–271.
[34] J. Han and M. Kamber, Data Mining Concepts and Techniques, Morgan Kaufmann, 2001.
[35] S. Haykin, Neural Networks, 2nd edn, New York, Prentice Hall, 1999.
[36] J. Heer and E.H. Chi, Mining the structure of user activity using cluster stability, in: Proceedings of the Workshop on Web Analytics, Second SIAM Conference on Data Mining, ACM Press, 2002.
[37] L. Hildebrand, Hybrid Computational Intelligence Systems for Real World Applications, in: Studies in Fuzziness and Soft Computing, Vol. 179, Springer, Berlin, Heidelberg, 2005, pp. 165–195.
[38] J. Horng and C. Yeh, Applying Genetic Algorithms to Query Optimization in Document Retrieval, Information Processing & Management 36(1) (2000), 737–759.
[39] X. Huang, N. Cercone and A. An, Comparison of interestingness functions for learning web usage patterns, in: Proceedings of the Eleventh International Conference on Information and Knowledge Management, ACM Press, 2002, pp. 617–620.
[40] J.Z. Huang, M. Ng, W.-K. Ching, J. Ng and D. Cheung, A cube model and cluster analysis for web access sessions, in: WEBKDD 2001—Mining Web Log Data Across All Customers Touch Points, Third International Workshop, San Francisco, CA, USA, August 26, 2001, R. Kohavi, B. Masand, M. Spiliopoulou and J. Srivastava, eds, Revised papers: Lecture Notes in Computer Science, Vol. 2356, Springer, 2002, pp. 48–67.
[41] S.E. Jespersen, J. Thorhauge and T.B. Pedersen, A hybrid approach to web usage mining, in: Proceedings of the 4th International Conference on Data Warehousing and Knowledge Discovery, Springer-Verlag, 2002, pp. 73–82.
[42] K.P. Joshi, A. Joshi and Y. Yesha, On using a warehouse to analyze web logs, Distributed and Parallel Databases 13(2) (2003), 161–180.
[43] A. Joshi and R. Krishnapuram, Robust fuzzy clustering methods to support web mining, in: Proc. Workshop in Data Mining and Knowledge Discovery, SIGMOD 1998, 1998, pp. 15-1–15-8.
[44] T. Kamdar and A. Joshi, On Creating Adaptive Web Sites using Web Log Mining, Technical Report TR-CS-00-05, Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, 2000.
[45] D.-W. Kim and K.H. Lee, A New Fuzzy Information Retrieval System Based on User Preference Model, FUZZ-IEEE, 2001, pp. 127–130.
[46] S. Kim and B.T. Zhang, Web document retrieval by genetic learning of importance factors for html tags, in: Proc. Int. Workshop Text Web Mining, Melbourne, Australia, 2000, pp. 13–23.
[47] J. Klir and B. Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications, New York, Prentice Hall, 1995.
[48] R. Kosala and H. Blockeel, Web mining research: A survey, SIGKDD Explorations: Newsletter of the Special Interest Group (SIG) on Knowledge Discovery & Data Mining, ACM, 2(1) (2000), 1–15.
[49] D.H. Kraft, J. Chen, M.J. Martin-Bautista and M.A. Vila, Textual Information Retrieval with User Profiles Using Fuzzy Clustering and Inferencing, in: Intelligent Exploration of the Web, P.S. Szczepaniak, J. Segovia, J. Kacprzyk and L.A. Zadeh, eds, Heidelberg, Germany, Physica-Verlag, 2002.
[50] R. Krishnapuram, A. Joshi, O. Nasraoui and L. Yi, Low complexity fuzzy relational clustering algorithms for web mining, IEEE Transactions on Fuzzy Systems 9(4) (2001), 595–608.
[51] R. Krishnapuram, A. Joshi and L. Yi, A fuzzy relative of the k-medoids algorithm with application to document and snippet clustering, in: Proc. IEEE Int. Conf. Fuzzy Syst., 1999.
[52] R.J. Kuo and J.A. Chen, A decision support system for order selection in electronic commerce based on fuzzy neural network supported by real-coded genetic algorithm, Expert Systems with Applications 26 (2004), 141–154.
[53] T. Lampinen and H. Koivisto, Profiling network applications with fuzzy c-means clustering and self-organising map, in: Proceedings of the 1st International Conference on Fuzzy Systems and Knowledge Discovery: Computational Intelligence for the E-Age, 2002, pp. 300–304.
[54] P. Langley, User modeling in adaptive interfaces, in: Proceedings of the Seventh International Conference on User Modeling, Banff, Canada, 1999, pp. 357–370.
[55] R.S.T. Lee, iJADE IWShopper: A new age of intelligent web shopping system based on fuzzy-neuro agent technology, in: Web Intelligence: Research and Development, Lecture Notes in Artificial Intelligence, N. Zhong, Y. Yao, J. Liu and S. Ohsuga, eds, Vol. 2198, 2001, pp. 403–412.

[56] W. Lee and T. Tsai, An interactive agent-based system for concept based web search, Expert Systems with Applications 24 (2003), 365–373.
[57] C.T. Lin and C.S. Lee, Neural Fuzzy Systems — A Neuro-Fuzzy Synergism to Intelligent Systems, Prentice-Hall, Englewood Cliffs, NJ, 1996.
[58] G. Linden, B. Smith and J. York, Amazon.com recommendations: Item-to-item collaborative filtering, IEEE Internet Computing 7(1) (2003), 76–80.
[59] V. Loia and P. Luongo, An evolutionary approach to automatic web page categorization and updating, in: Web Intelligence: Research and Development, N. Zhong, Y. Yao, J. Liu and S. Ohsuga, eds, LNCS Vol. 2198, Singapore, Springer-Verlag, 2001, pp. 292–302.
[60] G.D. Magoulas, K.A. Papanikolau and M. Grigoriadou, Neuro-fuzzy synergism for planning the content in a web-based course, Informatica 25(1) (2001), 39–48.
[61] E. Menasalvas, S. Millan, J. Pena, M. Hadjimichael and O. Marban, Subsessions: A granular approach to click path analysis, in: Proceedings of the FUZZ-IEEE Fuzzy Sets and Systems Conference, World Congress on Computational Intelligence, Honolulu, HI, 2002, pp. 12–17.
[62] K. Menon and C.H. Dagli, Web personalization using neuro-fuzzy clustering algorithms, in: Proc. of the 22nd International Conference of the North American Fuzzy Information Processing Society (NAFIPS 2003), 2003, pp. 525–529.
[63] H. Min, T. Smolinski and G. Boratyn, A GA-based data mining approach to profiling the adopters of E-purchasing, in: Proceedings of the 2001 IEEE 3rd International Conference on Information Reuse and Integration, 2001, pp. 1–6.
[64] S. Mitra and S.K. Pal, Fuzzy multi-layer perceptron, inferencing and rule generation, IEEE Trans. Neural Networks 6 (1995), 51–63.
[65] B. Mobasher, R. Cooley and J. Srivastava, Automatic personalization based on Web usage mining, Communications of the ACM 43(8) (2000), 142–151.
[66] B. Mobasher, H. Dai, T. Luo and M. Nakagawa, Effective personalization based on association rule discovery from Web usage data, in: ACM Workshop on Web Information and Data Management, Atlanta, GA, 2001.
[67] B. Mortazavi-Asl, Discovering and Mining User Web-Page Traversal Patterns, Master thesis, Simon Fraser University, 2001.
[68] M. Mulvenna, S. Anand and A. Buchner, Personalization on the net using web mining, CACM 43(8) (2000), 123–125.
[69] A. Nanopoulos, D. Katsaros and Y. Manolopoulos, Exploiting web log mining for web cache enhancement, in: WEBKDD 2001—Mining Web Log Data Across All Customers Touch Points, Third International Workshop, San Francisco, CA, USA, August 26, 2001, R. Kohavi, B. Masand, M. Spiliopoulou and J. Srivastava, eds, Revised papers: Lecture Notes in Computer Science, Vol. 2356, Springer, 2002, pp. 68–87.
[70] O. Nasraoui, World Wide Web Personalization, in: Encyclopedia of Data Mining and Data Warehousing, J. Wang, ed, Idea Group, 2005.
[71] O. Nasraoui and R. Krishnapuram, Extracting web user profiles using relational competitive fuzzy clustering, International Journal on Artificial Intelligence Tools 9(4) (2000), 509–526.

[72] O. Nasraoui, R. Krishnapuram, A. Joshi and T. Kamdar, Automatic web user profiling and personalization using robust fuzzy relational clustering, in: E-Commerce and Intelligent Methods, Springer-Verlag, 2002. [73] O. Nasraoui and C. Petenes, Combining web usage mining and fuzzy inference for website personalization, in: Proceedings of WEBKDD 2003: Web Mining as Premise to effective Web Applications, 2003, pp. 37–46. [74] O. Nasraoui and M. Pavuluri, Accurate web recommendations based on profile-specific url-predictor neural networks, in: Proc. of the 13th International World Wide Web Conference on Alternate Track Papers & Posters, May 2004, pp. 300–301. [75] D. Nauck, Using symbolic data in neuro-fuzy classification, in: Proc. NAFIPS’99, New York, June 1999, pp. 536–540. [76] D. Nauck, F. Klawonn and R. Kruse, Foundations of Neuro– Fuzzy Systems, Wiley, Chichester, U.K., 1997. [77] Z.Z. Nick and P. Themis, Web search using a genetic algorithm, IEEE Internet Computing 5(2) (Mar/Apr 2001), 18–26. [78] S. Pal, V. Talwar and P. Mitra, Web Mining in soft computing framework: Relevance, state of the art and future directions, IEEE Transactions on Neural Networks 13(5) (2002), 1163– 1177. [79] G. Paliouras, C. Papatheodorou, V. Karkaletsis, P. Tzitziras and C.D. Spyropoulos, Large-Scale mining of usage data on Web sites, in: AAAI Spring Symposium on Adaptive User Interfaces, Stanford, California, 2000, pp. 92-97. [80] J. Pei, J. Han, B. Mortazavi-asl and H. Zhu, Mining access patterns efficiently from web logs, in: Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2000, pp. 396–407. [81] M. Perkowitz and O. Etzioni, Adaptive sites: Automatically learning from user access patterns, Technical Report UWCSE-97-03-01, University of Washington, 1997. [82] D. Pierrakos, G. Paliouras, C. Papatheodorou and C.D. Spyropoulos, Web usage mining as a tool for personalization: A survey, User Modeling and User-Adapted Interaction 13(4) (2003), 311–372. [83] J. Pitkow and K. Bharat, WEBVIZ: ATool forWorldWideWeb Access LogVisualization, in: Proceedings of the 1st International World-Wide Web Conference, Geneva, Switzerland, 1994, 271–277. [84] W. Pohl, Learning about the User Modeling and Machine Learning, in: International Conference on Machine Learning Workshop Machine Learning meets Human-Computer Interaction, V. Moustakis and J. Herrmann, eds, 1996, pp. 29–40. [85] B. Prasad, HYREC: A Hybrid Recommendation System for E-Commerce, in: Proc. of the 6th Int. Conference on CaseBased Reasoning (ICCBR), 2005. [86] T.H. Roh, K.J. Oh and I. Han, The collaborative filtering recommendation base don SOM cluster-indexing CBR, Expert Systems with Applications 25 (2003), 413–423. [87] C. Romero, S. Ventura and P. de Bra, Discovering prediction rules in AHA! Courses, in: Proceedings of the 9th International Conference on User Modeling, LNAI 2702, 2003, pp. 25–34. [88] K.P. Sankar, T. Varun and M. Pabitra, Web mining in soft computing framework: Relevance, state of the art and future directions, IEEE Transaction on Neural Networks 13(5) (2002), 1163–1177.

[89] F. Scarselli, S.L. Yong, M. Gori, M. Hagenbuchner, A.C. Tsoi and M. Maggini, Graph neural networks for ranking Web pages, in: Proceedings of the 2005 IEEE/WIC/ACM International Conference on Web Intelligence, 19–22 Sept. 2005, pp. 666–672.
[90] B. Schafer, J.A. Konstan and J. Riedl, E-commerce recommendation applications, Data Mining and Knowledge Discovery 5(1–2) (2001), 115–152, Kluwer Academic Publishers.
[91] C. Schmitt, D. Dengler and M. Bauer, Multivariate preference models and decision making with the MAUT machine, in: Proceedings of the 9th International Conference on User Modeling, Lecture Notes in Artificial Intelligence, Vol. 2702, 2003, pp. 297–302.
[92] H.P. Schwefel, Evolution and Optimum Seeking, New York, Wiley, 1995.
[93] Y.-W. Seo and B.-T. Zhang, Learning user’s preferences by analyzing Web-browsing behaviors, in: Proceedings of the Fourth International Conference on Autonomous Agents, Barcelona, Spain, June 2000, pp. 381–387.
[94] C. Shahabi, F. Banaei-Kashani and J. Faruque, A reliable, efficient and scalable system for Web usage data acquisition, in: WebKDD’01 Workshop in conjunction with ACM SIGKDD 2001, San Francisco, CA, August 2001.
[95] J. Shavlik and T. Eliassi, A system for building intelligent agents that learn to retrieve and extract information, Int. J. User Modeling and User-Adapted Interaction (Special Issue on User Modeling and Intelligent Agents), 2001.
[96] K. Shin and Y. Lee, A genetic algorithm application in bankruptcy prediction modelling, Expert Systems with Applications 23 (2002), 321–328.
[97] N. Spiliopoulou and L.C. Faulstich, WUM: A Web Utilization Miner, in: International Workshop on the Web and Databases, Valencia, Spain, LNCS, Vol. 1590, Springer, 1998, pp. 109–115.
[98] M. Spiliopoulou, Tutorial: Data Mining for the Web, PKDD’99, Prague, Czech Republic, 1999.
[99] R. Stathacopoulou, M. Grigoriadou and G.D. Magoulas, A neuro-fuzzy approach in student modeling, in: Proceedings of the 9th International Conference on User Modeling (UM2003), Lecture Notes in Artificial Intelligence, Vol. 2702, 2003, pp. 337–342.
[100] B.S. Suryavanshi, N. Shiri and S.P. Mudur, An efficient technique for mining usage profiles using relational fuzzy subtractive clustering, in: Proc. of the 2005 Int. Workshop on Challenges in Web Information Retrieval and Integration (WIRI’05), 2005, pp. 23–29.
[101] K. Tajima, K. Hatano, T. Matsukura, R. Sano and K. Tanaka, Discovery and retrieval of logical information units in Web, in: Proceedings of the Workshop on Organizing Web Space (WOWS 99), Berkeley, USA, August 1999, pp. 13–23.
[102] P.N. Tan and V. Kumar, Discovery of Web Robot Sessions Based on their Navigational Patterns, Data Mining and Knowledge Discovery 6(1) (2002), 9–35.
[103] A. Tsakonas, G. Dounias, I.P. Vlahavas and C.D. Spyropoulos, Hybrid computational intelligence schemes in complex domains: An extended review, Lecture Notes in Computer Science 2308 (2002), 494–511.
[104] A. Vakali, J. Pokorný and T. Dalamagas, An Overview of Web Data Clustering Practices, in: EDBT Workshops, 2004, pp. 597–606.
[105] G.I. Webb, M.J. Pazzani and D. Billsus, Machine learning for user modeling, User Modeling and User-Adapted Interaction 11 (2001), 19–29, Kluwer.
[106] S.S.C. Wong and S. Pal, Mining fuzzy association rules for web access case adaptation, in: Workshop on Soft Computing in Case-Based Reasoning, International Conference on Case-Based Reasoning (ICCBR’01), 2001.
[107] Y. Xie and V.V. Phoha, Web user clustering from access log using belief function, in: Proceedings of the First International Conference on Knowledge Capture (K-CAP 2001), ACM Press, 2001, pp. 202–208.
[108] T.W. Yan, M. Jacobsen, H. Garcia-Molina and U. Dayal, From user access patterns to dynamic hypertext linking, WWW5/Computer Networks 28(7–11) (1996), 1007–1014.
[109] J. Yan, M. Ryan and J. Power, Using Fuzzy Logic, New York, Prentice Hall, 1994.
[110] L.A. Zadeh, From computing with numbers to computing with words – from manipulation of measurements to manipulation of perceptions, IEEE Transactions on Circuits and Systems 45 (1999), 105–119.
[111] L.A. Zadeh, Web Intelligence, in: WK and FL-Studies in Fuzziness and Soft Computing, Springer, Vol. 164, 2005, pp. 1–18.
[112] Y.-Q. Zhang, A. Kandel, T.Y. Lin and Y.Y. Yao, Computational Web Intelligence: Intelligent technology for web applications, in: Series in Machine Perception and Artificial Intelligence, Vol. 58, World Scientific, 2004.